2026-03-07 00:00:06.494813 | Job console starting
2026-03-07 00:00:06.525052 | Updating git repos
2026-03-07 00:00:06.627748 | Cloning repos into workspace
2026-03-07 00:00:07.023299 | Restoring repo states
2026-03-07 00:00:07.041683 | Merging changes
2026-03-07 00:00:07.041707 | Checking out repos
2026-03-07 00:00:07.563959 | Preparing playbooks
2026-03-07 00:00:08.615670 | Running Ansible setup
2026-03-07 00:00:16.680951 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2026-03-07 00:00:19.546146 |
2026-03-07 00:00:19.546261 | PLAY [Base pre]
2026-03-07 00:00:19.564396 |
2026-03-07 00:00:19.564508 | TASK [Setup log path fact]
2026-03-07 00:00:19.612652 | orchestrator | ok
2026-03-07 00:00:19.636264 |
2026-03-07 00:00:19.636389 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-03-07 00:00:19.669754 | orchestrator | ok
2026-03-07 00:00:19.684150 |
2026-03-07 00:00:19.684249 | TASK [emit-job-header : Print job information]
2026-03-07 00:00:19.730021 | # Job Information
2026-03-07 00:00:19.730155 | Ansible Version: 2.16.14
2026-03-07 00:00:19.730184 | Job: testbed-deploy-current-in-a-nutshell-with-tempest-ubuntu-24.04
2026-03-07 00:00:19.730212 | Pipeline: periodic-midnight
2026-03-07 00:00:19.730232 | Executor: 521e9411259a
2026-03-07 00:00:19.730250 | Triggered by: https://github.com/osism/testbed
2026-03-07 00:00:19.730268 | Event ID: 16d29647e22242fe8869806ad52757f6
2026-03-07 00:00:19.735558 |
2026-03-07 00:00:19.735641 | LOOP [emit-job-header : Print node information]
2026-03-07 00:00:19.876617 | orchestrator | ok:
2026-03-07 00:00:19.876803 | orchestrator | # Node Information
2026-03-07 00:00:19.876835 | orchestrator | Inventory Hostname: orchestrator
2026-03-07 00:00:19.876857 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2026-03-07 00:00:19.876876 | orchestrator | Username: zuul-testbed01
2026-03-07 00:00:19.876893 | orchestrator | Distro: Debian 12.13
2026-03-07 00:00:19.876913 | orchestrator | Provider: static-testbed
2026-03-07 00:00:19.876930 | orchestrator | Region:
2026-03-07 00:00:19.877144 | orchestrator | Label: testbed-orchestrator
2026-03-07 00:00:19.877678 | orchestrator | Product Name: OpenStack Nova
2026-03-07 00:00:19.877945 | orchestrator | Interface IP: 81.163.193.140
2026-03-07 00:00:19.896688 |
2026-03-07 00:00:19.896783 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2026-03-07 00:00:20.958180 | orchestrator -> localhost | changed
2026-03-07 00:00:20.965528 |
2026-03-07 00:00:20.965622 | TASK [log-inventory : Copy ansible inventory to logs dir]
2026-03-07 00:00:22.956248 | orchestrator -> localhost | changed
2026-03-07 00:00:22.967418 |
2026-03-07 00:00:22.967515 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2026-03-07 00:00:23.582237 | orchestrator -> localhost | ok
2026-03-07 00:00:23.587801 |
2026-03-07 00:00:23.587892 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2026-03-07 00:00:23.624979 | orchestrator | ok
2026-03-07 00:00:23.650729 | orchestrator | included: /var/lib/zuul/builds/9bd0fd9b250244a9b798ae8004b80082/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2026-03-07 00:00:23.673491 |
2026-03-07 00:00:23.673585 | TASK [add-build-sshkey : Create Temp SSH key]
2026-03-07 00:00:30.922163 | orchestrator -> localhost | Generating public/private rsa key pair.
2026-03-07 00:00:30.922332 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/9bd0fd9b250244a9b798ae8004b80082/work/9bd0fd9b250244a9b798ae8004b80082_id_rsa
2026-03-07 00:00:30.922363 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/9bd0fd9b250244a9b798ae8004b80082/work/9bd0fd9b250244a9b798ae8004b80082_id_rsa.pub
2026-03-07 00:00:30.922385 | orchestrator -> localhost | The key fingerprint is:
2026-03-07 00:00:30.922408 | orchestrator -> localhost | SHA256:2bmvq6YfnLr5859HUbx9NWSeHaEW8LtcZUT/BV/rusE zuul-build-sshkey
2026-03-07 00:00:30.922427 | orchestrator -> localhost | The key's randomart image is:
2026-03-07 00:00:30.922454 | orchestrator -> localhost | +---[RSA 3072]----+
2026-03-07 00:00:30.922473 | orchestrator -> localhost | | ...oB=|
2026-03-07 00:00:30.922491 | orchestrator -> localhost | | . ==@|
2026-03-07 00:00:30.922508 | orchestrator -> localhost | | + =%|
2026-03-07 00:00:30.922524 | orchestrator -> localhost | | o .. +o*|
2026-03-07 00:00:30.922541 | orchestrator -> localhost | | S o . +o|
2026-03-07 00:00:30.922559 | orchestrator -> localhost | | . . .o = |
2026-03-07 00:00:30.922576 | orchestrator -> localhost | | + . E |
2026-03-07 00:00:30.922593 | orchestrator -> localhost | | oo. . .+ |
2026-03-07 00:00:30.922609 | orchestrator -> localhost | | =*+++++o |
2026-03-07 00:00:30.922625 | orchestrator -> localhost | +----[SHA256]-----+
2026-03-07 00:00:30.922663 | orchestrator -> localhost | ok: Runtime: 0:00:05.875118
2026-03-07 00:00:30.929830 |
2026-03-07 00:00:30.929916 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2026-03-07 00:00:30.967581 | orchestrator | ok
2026-03-07 00:00:30.986329 | orchestrator | included: /var/lib/zuul/builds/9bd0fd9b250244a9b798ae8004b80082/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2026-03-07 00:00:31.005916 |
2026-03-07 00:00:31.006008 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2026-03-07 00:00:31.062188 | orchestrator | skipping: Conditional result was False
2026-03-07 00:00:31.068520 |
2026-03-07 00:00:31.068607 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2026-03-07 00:00:32.035708 | orchestrator | changed
2026-03-07 00:00:32.044981 |
2026-03-07 00:00:32.045105 | TASK [add-build-sshkey : Make sure user has a .ssh]
2026-03-07 00:00:32.377819 | orchestrator | ok
2026-03-07 00:00:32.388280 |
2026-03-07 00:00:32.388373 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2026-03-07 00:00:32.931357 | orchestrator | ok
2026-03-07 00:00:32.937053 |
2026-03-07 00:00:32.937152 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2026-03-07 00:00:33.482342 | orchestrator | ok
2026-03-07 00:00:33.488555 |
2026-03-07 00:00:33.488640 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2026-03-07 00:00:33.542315 | orchestrator | skipping: Conditional result was False
2026-03-07 00:00:33.549666 |
2026-03-07 00:00:33.549752 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2026-03-07 00:00:34.567899 | orchestrator -> localhost | changed
2026-03-07 00:00:34.589463 |
2026-03-07 00:00:34.589581 | TASK [add-build-sshkey : Add back temp key]
2026-03-07 00:00:35.581520 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/9bd0fd9b250244a9b798ae8004b80082/work/9bd0fd9b250244a9b798ae8004b80082_id_rsa (zuul-build-sshkey)
2026-03-07 00:00:35.581790 | orchestrator -> localhost | ok: Runtime: 0:00:00.061994
2026-03-07 00:00:35.590486 |
2026-03-07 00:00:35.590571 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2026-03-07 00:00:36.407104 | orchestrator | ok
2026-03-07 00:00:36.412101 |
2026-03-07 00:00:36.412187 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2026-03-07 00:00:36.470314 | orchestrator | skipping: Conditional result was False
2026-03-07 00:00:36.547716 |
2026-03-07 00:00:36.547809 | TASK [start-zuul-console : Start zuul_console daemon.]
2026-03-07 00:00:37.114805 | orchestrator | ok
2026-03-07 00:00:37.134135 |
2026-03-07 00:00:37.134232 | TASK [validate-host : Define zuul_info_dir fact]
2026-03-07 00:00:37.197856 | orchestrator | ok
2026-03-07 00:00:37.212592 |
2026-03-07 00:00:37.212685 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2026-03-07 00:00:37.714437 | orchestrator -> localhost | ok
2026-03-07 00:00:37.720761 |
2026-03-07 00:00:37.720853 | TASK [validate-host : Collect information about the host]
2026-03-07 00:00:39.301214 | orchestrator | ok
2026-03-07 00:00:39.320742 |
2026-03-07 00:00:39.320851 | TASK [validate-host : Sanitize hostname]
2026-03-07 00:00:39.469330 | orchestrator | ok
2026-03-07 00:00:39.473956 |
2026-03-07 00:00:39.474060 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2026-03-07 00:00:41.032526 | orchestrator -> localhost | changed
2026-03-07 00:00:41.037758 |
2026-03-07 00:00:41.037837 | TASK [validate-host : Collect information about zuul worker]
2026-03-07 00:00:41.595406 | orchestrator | ok
2026-03-07 00:00:41.599752 |
2026-03-07 00:00:41.599836 | TASK [validate-host : Write out all zuul information for each host]
2026-03-07 00:00:42.583667 | orchestrator -> localhost | changed
2026-03-07 00:00:42.595091 |
2026-03-07 00:00:42.595177 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2026-03-07 00:00:42.916865 | orchestrator | ok
2026-03-07 00:00:42.921827 |
2026-03-07 00:00:42.921913 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2026-03-07 00:02:04.669574 | orchestrator | changed:
2026-03-07 00:02:04.669817 | orchestrator | .d..t...... src/
2026-03-07 00:02:04.669852 | orchestrator | .d..t...... src/github.com/
2026-03-07 00:02:04.669877 | orchestrator | .d..t...... src/github.com/osism/
2026-03-07 00:02:04.669899 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2026-03-07 00:02:04.669920 | orchestrator | RedHat.yml
2026-03-07 00:02:04.684327 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2026-03-07 00:02:04.684345 | orchestrator | RedHat.yml
2026-03-07 00:02:04.684399 | orchestrator | = 1.53.0"...
2026-03-07 00:02:15.921079 | orchestrator | - Finding hashicorp/local versions matching ">= 2.2.0"...
2026-03-07 00:02:16.050503 | orchestrator | - Installing hashicorp/null v3.2.4...
2026-03-07 00:02:16.582196 | orchestrator | - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2026-03-07 00:02:16.646857 | orchestrator | - Installing terraform-provider-openstack/openstack v3.4.0...
2026-03-07 00:02:17.369795 | orchestrator | - Installed terraform-provider-openstack/openstack v3.4.0 (signed, key ID 4F80527A391BEFD2)
2026-03-07 00:02:17.433197 | orchestrator | - Installing hashicorp/local v2.7.0...
2026-03-07 00:02:18.435436 | orchestrator | - Installed hashicorp/local v2.7.0 (signed, key ID 0C0AF313E5FD9F80)
2026-03-07 00:02:18.435498 | orchestrator |
2026-03-07 00:02:18.435504 | orchestrator | Providers are signed by their developers.
2026-03-07 00:02:18.435509 | orchestrator | If you'd like to know more about provider signing, you can read about it here:
2026-03-07 00:02:18.435514 | orchestrator | https://opentofu.org/docs/cli/plugins/signing/
2026-03-07 00:02:18.435521 | orchestrator |
2026-03-07 00:02:18.435526 | orchestrator | OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2026-03-07 00:02:18.435530 | orchestrator | selections it made above. Include this file in your version control repository
2026-03-07 00:02:18.435539 | orchestrator | so that OpenTofu can guarantee to make the same selections by default when
2026-03-07 00:02:18.435543 | orchestrator | you run "tofu init" in the future.
2026-03-07 00:02:18.435547 | orchestrator |
2026-03-07 00:02:18.435551 | orchestrator | OpenTofu has been successfully initialized!
2026-03-07 00:02:18.435555 | orchestrator |
2026-03-07 00:02:18.435559 | orchestrator | You may now begin working with OpenTofu. Try running "tofu plan" to see
2026-03-07 00:02:18.435563 | orchestrator | any changes that are required for your infrastructure. All OpenTofu commands
2026-03-07 00:02:18.435567 | orchestrator | should now work.
2026-03-07 00:02:18.435571 | orchestrator |
2026-03-07 00:02:18.435575 | orchestrator | If you ever set or change modules or backend configuration for OpenTofu,
2026-03-07 00:02:18.435579 | orchestrator | rerun this command to reinitialize your working directory. If you forget, other
2026-03-07 00:02:18.435583 | orchestrator | commands will detect it and remind you to do so if necessary.
2026-03-07 00:02:18.621812 | orchestrator | Created and switched to workspace "ci"!
2026-03-07 00:02:18.621874 | orchestrator |
2026-03-07 00:02:18.621881 | orchestrator | You're now on a new, empty workspace. Workspaces isolate their state,
2026-03-07 00:02:18.621889 | orchestrator | so if you run "tofu plan" OpenTofu will not see any existing state
2026-03-07 00:02:18.621894 | orchestrator | for this configuration.
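The provider resolution shown above (hashicorp/null, hashicorp/local, terraform-provider-openstack/openstack) implies a `required_providers` block in the testbed configuration. A minimal sketch of what such a block might look like; the `>= 2.2.0` constraint for local appears in the init output, while attributing the truncated `>= 1.53.0` constraint to the openstack provider is an assumption:

```hcl
terraform {
  required_providers {
    # Constraint inferred from the "Finding hashicorp/local versions
    # matching \">= 2.2.0\"..." line in the init output.
    local = {
      source  = "hashicorp/local"
      version = ">= 2.2.0"
    }
    # Assumption: the truncated "= 1.53.0" constraint fragment likely
    # belongs to this provider; the actual constraint is not visible.
    openstack = {
      source  = "terraform-provider-openstack/openstack"
      version = ">= 1.53.0"
    }
    null = {
      source = "hashicorp/null"
    }
  }
}
```

With such a block in place, `tofu init` resolves the newest matching versions (here null v3.2.4, openstack v3.4.0, local v2.7.0) and pins them in `.terraform.lock.hcl`, as the log describes.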
2026-03-07 00:02:18.795623 | orchestrator | ci.auto.tfvars
2026-03-07 00:02:18.802039 | orchestrator | default_custom.tf
2026-03-07 00:02:20.354464 | orchestrator | data.openstack_networking_network_v2.public: Reading...
2026-03-07 00:02:20.935925 | orchestrator | data.openstack_networking_network_v2.public: Read complete after 1s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2026-03-07 00:02:21.163688 | orchestrator |
2026-03-07 00:02:21.163769 | orchestrator | OpenTofu used the selected providers to generate the following execution
2026-03-07 00:02:21.163782 | orchestrator | plan. Resource actions are indicated with the following symbols:
2026-03-07 00:02:21.163790 | orchestrator | + create
2026-03-07 00:02:21.163797 | orchestrator | <= read (data resources)
2026-03-07 00:02:21.163805 | orchestrator |
2026-03-07 00:02:21.163813 | orchestrator | OpenTofu will perform the following actions:
2026-03-07 00:02:21.163828 | orchestrator |
2026-03-07 00:02:21.163836 | orchestrator | # data.openstack_images_image_v2.image will be read during apply
2026-03-07 00:02:21.163843 | orchestrator | # (config refers to values not yet known)
2026-03-07 00:02:21.163850 | orchestrator | <= data "openstack_images_image_v2" "image" {
2026-03-07 00:02:21.163858 | orchestrator | + checksum = (known after apply)
2026-03-07 00:02:21.163865 | orchestrator | + created_at = (known after apply)
2026-03-07 00:02:21.163871 | orchestrator | + file = (known after apply)
2026-03-07 00:02:21.163878 | orchestrator | + id = (known after apply)
2026-03-07 00:02:21.163905 | orchestrator | + metadata = (known after apply)
2026-03-07 00:02:21.163912 | orchestrator | + min_disk_gb = (known after apply)
2026-03-07 00:02:21.163920 | orchestrator | + min_ram_mb = (known after apply)
2026-03-07 00:02:21.163926 | orchestrator | + most_recent = true
2026-03-07 00:02:21.163934 | orchestrator | + name = (known after apply)
2026-03-07 00:02:21.163940 | orchestrator | + protected = (known after apply)
2026-03-07 00:02:21.163947 | orchestrator | + region = (known after apply)
2026-03-07 00:02:21.163957 | orchestrator | + schema = (known after apply)
2026-03-07 00:02:21.163964 | orchestrator | + size_bytes = (known after apply)
2026-03-07 00:02:21.163971 | orchestrator | + tags = (known after apply)
2026-03-07 00:02:21.163978 | orchestrator | + updated_at = (known after apply)
2026-03-07 00:02:21.163985 | orchestrator | }
2026-03-07 00:02:21.163994 | orchestrator |
2026-03-07 00:02:21.164019 | orchestrator | # data.openstack_images_image_v2.image_node will be read during apply
2026-03-07 00:02:21.164026 | orchestrator | # (config refers to values not yet known)
2026-03-07 00:02:21.164032 | orchestrator | <= data "openstack_images_image_v2" "image_node" {
2026-03-07 00:02:21.164039 | orchestrator | + checksum = (known after apply)
2026-03-07 00:02:21.164045 | orchestrator | + created_at = (known after apply)
2026-03-07 00:02:21.164052 | orchestrator | + file = (known after apply)
2026-03-07 00:02:21.164059 | orchestrator | + id = (known after apply)
2026-03-07 00:02:21.164066 | orchestrator | + metadata = (known after apply)
2026-03-07 00:02:21.164074 | orchestrator | + min_disk_gb = (known after apply)
2026-03-07 00:02:21.164081 | orchestrator | + min_ram_mb = (known after apply)
2026-03-07 00:02:21.164088 | orchestrator | + most_recent = true
2026-03-07 00:02:21.164095 | orchestrator | + name = (known after apply)
2026-03-07 00:02:21.164102 | orchestrator | + protected = (known after apply)
2026-03-07 00:02:21.164109 | orchestrator | + region = (known after apply)
2026-03-07 00:02:21.164116 | orchestrator | + schema = (known after apply)
2026-03-07 00:02:21.164123 | orchestrator | + size_bytes = (known after apply)
2026-03-07 00:02:21.164130 | orchestrator | + tags = (known after apply)
2026-03-07 00:02:21.164137 | orchestrator | + updated_at = (known after apply)
2026-03-07 00:02:21.164144 | orchestrator | }
2026-03-07 00:02:21.164150 | orchestrator |
2026-03-07 00:02:21.164157 | orchestrator | # local_file.MANAGER_ADDRESS will be created
2026-03-07 00:02:21.164165 | orchestrator | + resource "local_file" "MANAGER_ADDRESS" {
2026-03-07 00:02:21.164172 | orchestrator | + content = (known after apply)
2026-03-07 00:02:21.164180 | orchestrator | + content_base64sha256 = (known after apply)
2026-03-07 00:02:21.164187 | orchestrator | + content_base64sha512 = (known after apply)
2026-03-07 00:02:21.164194 | orchestrator | + content_md5 = (known after apply)
2026-03-07 00:02:21.164201 | orchestrator | + content_sha1 = (known after apply)
2026-03-07 00:02:21.164208 | orchestrator | + content_sha256 = (known after apply)
2026-03-07 00:02:21.164215 | orchestrator | + content_sha512 = (known after apply)
2026-03-07 00:02:21.164222 | orchestrator | + directory_permission = "0777"
2026-03-07 00:02:21.164229 | orchestrator | + file_permission = "0644"
2026-03-07 00:02:21.164235 | orchestrator | + filename = ".MANAGER_ADDRESS.ci"
2026-03-07 00:02:21.164242 | orchestrator | + id = (known after apply)
2026-03-07 00:02:21.164249 | orchestrator | }
2026-03-07 00:02:21.164258 | orchestrator |
2026-03-07 00:02:21.164266 | orchestrator | # local_file.id_rsa_pub will be created
2026-03-07 00:02:21.164273 | orchestrator | + resource "local_file" "id_rsa_pub" {
2026-03-07 00:02:21.164280 | orchestrator | + content = (known after apply)
2026-03-07 00:02:21.164287 | orchestrator | + content_base64sha256 = (known after apply)
2026-03-07 00:02:21.164294 | orchestrator | + content_base64sha512 = (known after apply)
2026-03-07 00:02:21.164301 | orchestrator | + content_md5 = (known after apply)
2026-03-07 00:02:21.164308 | orchestrator | + content_sha1 = (known after apply)
2026-03-07 00:02:21.164316 | orchestrator | + content_sha256 = (known after apply)
2026-03-07 00:02:21.164323 | orchestrator | + content_sha512 = (known after apply)
2026-03-07 00:02:21.164329 | orchestrator | + directory_permission = "0777"
2026-03-07 00:02:21.164336 | orchestrator | + file_permission = "0644"
2026-03-07 00:02:21.164348 | orchestrator | + filename = ".id_rsa.ci.pub"
2026-03-07 00:02:21.164355 | orchestrator | + id = (known after apply)
2026-03-07 00:02:21.164362 | orchestrator | }
2026-03-07 00:02:21.164369 | orchestrator |
2026-03-07 00:02:21.164382 | orchestrator | # local_file.inventory will be created
2026-03-07 00:02:21.164389 | orchestrator | + resource "local_file" "inventory" {
2026-03-07 00:02:21.164396 | orchestrator | + content = (known after apply)
2026-03-07 00:02:21.164403 | orchestrator | + content_base64sha256 = (known after apply)
2026-03-07 00:02:21.164410 | orchestrator | + content_base64sha512 = (known after apply)
2026-03-07 00:02:21.164417 | orchestrator | + content_md5 = (known after apply)
2026-03-07 00:02:21.164423 | orchestrator | + content_sha1 = (known after apply)
2026-03-07 00:02:21.164431 | orchestrator | + content_sha256 = (known after apply)
2026-03-07 00:02:21.164438 | orchestrator | + content_sha512 = (known after apply)
2026-03-07 00:02:21.164445 | orchestrator | + directory_permission = "0777"
2026-03-07 00:02:21.164452 | orchestrator | + file_permission = "0644"
2026-03-07 00:02:21.164459 | orchestrator | + filename = "inventory.ci"
2026-03-07 00:02:21.164466 | orchestrator | + id = (known after apply)
2026-03-07 00:02:21.164472 | orchestrator | }
2026-03-07 00:02:21.164481 | orchestrator |
2026-03-07 00:02:21.164488 | orchestrator | # local_sensitive_file.id_rsa will be created
2026-03-07 00:02:21.164495 | orchestrator | + resource "local_sensitive_file" "id_rsa" {
2026-03-07 00:02:21.164502 | orchestrator | + content = (sensitive value)
2026-03-07 00:02:21.164509 | orchestrator | + content_base64sha256 = (known after apply)
2026-03-07 00:02:21.164516 | orchestrator | + content_base64sha512 = (known after apply)
2026-03-07 00:02:21.164523 | orchestrator | + content_md5 = (known after apply)
2026-03-07 00:02:21.164530 | orchestrator | + content_sha1 = (known after apply)
2026-03-07 00:02:21.164537 | orchestrator | + content_sha256 = (known after apply)
2026-03-07 00:02:21.164543 | orchestrator | + content_sha512 = (known after apply)
2026-03-07 00:02:21.164550 | orchestrator | + directory_permission = "0700"
2026-03-07 00:02:21.164557 | orchestrator | + file_permission = "0600"
2026-03-07 00:02:21.164563 | orchestrator | + filename = ".id_rsa.ci"
2026-03-07 00:02:21.164570 | orchestrator | + id = (known after apply)
2026-03-07 00:02:21.164577 | orchestrator | }
2026-03-07 00:02:21.164584 | orchestrator |
2026-03-07 00:02:21.164591 | orchestrator | # null_resource.node_semaphore will be created
2026-03-07 00:02:21.164598 | orchestrator | + resource "null_resource" "node_semaphore" {
2026-03-07 00:02:21.164605 | orchestrator | + id = (known after apply)
2026-03-07 00:02:21.164612 | orchestrator | }
2026-03-07 00:02:21.164619 | orchestrator |
2026-03-07 00:02:21.164627 | orchestrator | # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2026-03-07 00:02:21.164634 | orchestrator | + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2026-03-07 00:02:21.164640 | orchestrator | + attachment = (known after apply)
2026-03-07 00:02:21.164647 | orchestrator | + availability_zone = "nova"
2026-03-07 00:02:21.164654 | orchestrator | + id = (known after apply)
2026-03-07 00:02:21.164661 | orchestrator | + image_id = (known after apply)
2026-03-07 00:02:21.164668 | orchestrator | + metadata = (known after apply)
2026-03-07 00:02:21.164675 | orchestrator | + name = "testbed-volume-manager-base"
2026-03-07 00:02:21.164682 | orchestrator | + region = (known after apply)
2026-03-07 00:02:21.164689 | orchestrator | + size = 80
2026-03-07 00:02:21.164697 | orchestrator | + volume_retype_policy = "never"
2026-03-07 00:02:21.164704 | orchestrator | + volume_type = "ssd"
2026-03-07 00:02:21.164711 | orchestrator | }
2026-03-07 00:02:21.164720 | orchestrator |
2026-03-07 00:02:21.164727 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2026-03-07 00:02:21.164734 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-07 00:02:21.164741 | orchestrator | + attachment = (known after apply)
2026-03-07 00:02:21.164748 | orchestrator | + availability_zone = "nova"
2026-03-07 00:02:21.164754 | orchestrator | + id = (known after apply)
2026-03-07 00:02:21.164767 | orchestrator | + image_id = (known after apply)
2026-03-07 00:02:21.164774 | orchestrator | + metadata = (known after apply)
2026-03-07 00:02:21.164781 | orchestrator | + name = "testbed-volume-0-node-base"
2026-03-07 00:02:21.164788 | orchestrator | + region = (known after apply)
2026-03-07 00:02:21.164795 | orchestrator | + size = 80
2026-03-07 00:02:21.164802 | orchestrator | + volume_retype_policy = "never"
2026-03-07 00:02:21.164809 | orchestrator | + volume_type = "ssd"
2026-03-07 00:02:21.164816 | orchestrator | }
2026-03-07 00:02:21.164823 | orchestrator |
2026-03-07 00:02:21.164830 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2026-03-07 00:02:21.164837 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-07 00:02:21.164844 | orchestrator | + attachment = (known after apply)
2026-03-07 00:02:21.164851 | orchestrator | + availability_zone = "nova"
2026-03-07 00:02:21.164858 | orchestrator | + id = (known after apply)
2026-03-07 00:02:21.164865 | orchestrator | + image_id = (known after apply)
2026-03-07 00:02:21.164872 | orchestrator | + metadata = (known after apply)
2026-03-07 00:02:21.164879 | orchestrator | + name = "testbed-volume-1-node-base"
2026-03-07 00:02:21.164886 | orchestrator | + region = (known after apply)
2026-03-07 00:02:21.164893 | orchestrator | + size = 80
2026-03-07 00:02:21.164900 | orchestrator | + volume_retype_policy = "never"
2026-03-07 00:02:21.164907 | orchestrator | + volume_type = "ssd"
2026-03-07 00:02:21.164914 | orchestrator | }
2026-03-07 00:02:21.164921 | orchestrator |
2026-03-07 00:02:21.164928 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2026-03-07 00:02:21.164935 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-07 00:02:21.164942 | orchestrator | + attachment = (known after apply)
2026-03-07 00:02:21.164949 | orchestrator | + availability_zone = "nova"
2026-03-07 00:02:21.164956 | orchestrator | + id = (known after apply)
2026-03-07 00:02:21.164963 | orchestrator | + image_id = (known after apply)
2026-03-07 00:02:21.164970 | orchestrator | + metadata = (known after apply)
2026-03-07 00:02:21.164977 | orchestrator | + name = "testbed-volume-2-node-base"
2026-03-07 00:02:21.164983 | orchestrator | + region = (known after apply)
2026-03-07 00:02:21.164990 | orchestrator | + size = 80
2026-03-07 00:02:21.165026 | orchestrator | + volume_retype_policy = "never"
2026-03-07 00:02:21.165035 | orchestrator | + volume_type = "ssd"
2026-03-07 00:02:21.165042 | orchestrator | }
2026-03-07 00:02:21.165051 | orchestrator |
2026-03-07 00:02:21.165058 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2026-03-07 00:02:21.165064 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-07 00:02:21.165071 | orchestrator | + attachment = (known after apply)
2026-03-07 00:02:21.165078 | orchestrator | + availability_zone = "nova"
2026-03-07 00:02:21.165084 | orchestrator | + id = (known after apply)
2026-03-07 00:02:21.165091 | orchestrator | + image_id = (known after apply)
2026-03-07 00:02:21.165098 | orchestrator | + metadata = (known after apply)
2026-03-07 00:02:21.165108 | orchestrator | + name = "testbed-volume-3-node-base"
2026-03-07 00:02:21.165115 | orchestrator | + region = (known after apply)
2026-03-07 00:02:21.165122 | orchestrator | + size = 80
2026-03-07 00:02:21.165129 | orchestrator | + volume_retype_policy = "never"
2026-03-07 00:02:21.165136 | orchestrator | + volume_type = "ssd"
2026-03-07 00:02:21.165143 | orchestrator | }
2026-03-07 00:02:21.165150 | orchestrator |
2026-03-07 00:02:21.165157 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2026-03-07 00:02:21.165164 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-07 00:02:21.165171 | orchestrator | + attachment = (known after apply)
2026-03-07 00:02:21.165178 | orchestrator | + availability_zone = "nova"
2026-03-07 00:02:21.165185 | orchestrator | + id = (known after apply)
2026-03-07 00:02:21.165196 | orchestrator | + image_id = (known after apply)
2026-03-07 00:02:21.165203 | orchestrator | + metadata = (known after apply)
2026-03-07 00:02:21.165210 | orchestrator | + name = "testbed-volume-4-node-base"
2026-03-07 00:02:21.165217 | orchestrator | + region = (known after apply)
2026-03-07 00:02:21.165224 | orchestrator | + size = 80
2026-03-07 00:02:21.165231 | orchestrator | + volume_retype_policy = "never"
2026-03-07 00:02:21.165238 | orchestrator | + volume_type = "ssd"
2026-03-07 00:02:21.165245 | orchestrator | }
2026-03-07 00:02:21.165252 | orchestrator |
2026-03-07 00:02:21.165259 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2026-03-07 00:02:21.165266 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-07 00:02:21.165273 | orchestrator | + attachment = (known after apply)
2026-03-07 00:02:21.165279 | orchestrator | + availability_zone = "nova"
2026-03-07 00:02:21.165286 | orchestrator | + id = (known after apply)
2026-03-07 00:02:21.165293 | orchestrator | + image_id = (known after apply)
2026-03-07 00:02:21.165300 | orchestrator | + metadata = (known after apply)
2026-03-07 00:02:21.165308 | orchestrator | + name = "testbed-volume-5-node-base"
2026-03-07 00:02:21.165315 | orchestrator | + region = (known after apply)
2026-03-07 00:02:21.165322 | orchestrator | + size = 80
2026-03-07 00:02:21.165329 | orchestrator | + volume_retype_policy = "never"
2026-03-07 00:02:21.165336 | orchestrator | + volume_type = "ssd"
2026-03-07 00:02:21.165343 | orchestrator | }
2026-03-07 00:02:21.165351 | orchestrator |
2026-03-07 00:02:21.165358 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[0] will be created
2026-03-07 00:02:21.165366 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-07 00:02:21.165373 | orchestrator | + attachment = (known after apply)
2026-03-07 00:02:21.165380 | orchestrator | + availability_zone = "nova"
2026-03-07 00:02:21.165387 | orchestrator | + id = (known after apply)
2026-03-07 00:02:21.165394 | orchestrator | + metadata = (known after apply)
2026-03-07 00:02:21.165402 | orchestrator | + name = "testbed-volume-0-node-3"
2026-03-07 00:02:21.165409 | orchestrator | + region = (known after apply)
2026-03-07 00:02:21.165416 | orchestrator | + size = 20
2026-03-07 00:02:21.165423 | orchestrator | + volume_retype_policy = "never"
2026-03-07 00:02:21.165430 | orchestrator | + volume_type = "ssd"
2026-03-07 00:02:21.165437 | orchestrator | }
2026-03-07 00:02:21.165444 | orchestrator |
2026-03-07 00:02:21.165451 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[1] will be created
2026-03-07 00:02:21.165458 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-07 00:02:21.165465 | orchestrator | + attachment = (known after apply)
2026-03-07 00:02:21.165472 | orchestrator | + availability_zone = "nova"
2026-03-07 00:02:21.165480 | orchestrator | + id = (known after apply)
2026-03-07 00:02:21.165487 | orchestrator | + metadata = (known after apply)
2026-03-07 00:02:21.165494 | orchestrator | + name = "testbed-volume-1-node-4"
2026-03-07 00:02:21.165501 | orchestrator | + region = (known after apply)
2026-03-07 00:02:21.165508 | orchestrator | + size = 20
2026-03-07 00:02:21.165515 | orchestrator | + volume_retype_policy = "never"
2026-03-07 00:02:21.165522 | orchestrator | + volume_type = "ssd"
2026-03-07 00:02:21.165529 | orchestrator | }
2026-03-07 00:02:21.165535 | orchestrator |
2026-03-07 00:02:21.165543 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[2] will be created
2026-03-07 00:02:21.165549 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-07 00:02:21.165556 | orchestrator | + attachment = (known after apply)
2026-03-07 00:02:21.165563 | orchestrator | + availability_zone = "nova"
2026-03-07 00:02:21.165570 | orchestrator | + id = (known after apply)
2026-03-07 00:02:21.165577 | orchestrator | + metadata = (known after apply)
2026-03-07 00:02:21.165584 | orchestrator | + name = "testbed-volume-2-node-5"
2026-03-07 00:02:21.165592 | orchestrator | + region = (known after apply)
2026-03-07 00:02:21.165603 | orchestrator | + size = 20
2026-03-07 00:02:21.165610 | orchestrator | + volume_retype_policy = "never"
2026-03-07 00:02:21.165617 | orchestrator | + volume_type = "ssd"
2026-03-07 00:02:21.165624 | orchestrator | }
2026-03-07 00:02:21.165631 | orchestrator |
2026-03-07 00:02:21.165638 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[3] will be created
2026-03-07 00:02:21.165645 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-07 00:02:21.165652 | orchestrator | + attachment = (known after apply)
2026-03-07 00:02:21.165659 | orchestrator | + availability_zone = "nova"
2026-03-07 00:02:21.165666 | orchestrator | + id = (known after apply)
2026-03-07 00:02:21.165673 | orchestrator | + metadata = (known after apply)
2026-03-07 00:02:21.165680 | orchestrator | + name = "testbed-volume-3-node-3"
2026-03-07 00:02:21.165687 | orchestrator | + region = (known after apply)
2026-03-07 00:02:21.165694 | orchestrator | + size = 20
2026-03-07 00:02:21.165701 | orchestrator | + volume_retype_policy = "never"
2026-03-07 00:02:21.165708 | orchestrator | + volume_type = "ssd"
2026-03-07 00:02:21.165715 | orchestrator | }
2026-03-07 00:02:21.165724 | orchestrator |
2026-03-07 00:02:21.165731 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[4] will be created
2026-03-07 00:02:21.165738 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-07 00:02:21.165745 | orchestrator | + attachment = (known after apply)
2026-03-07 00:02:21.165752 | orchestrator | + availability_zone = "nova"
2026-03-07 00:02:21.165759 | orchestrator | + id = (known after apply)
2026-03-07 00:02:21.165766 | orchestrator | + metadata = (known after apply)
2026-03-07 00:02:21.165773 | orchestrator | + name = "testbed-volume-4-node-4"
2026-03-07 00:02:21.165780 | orchestrator | + region = (known after apply)
2026-03-07 00:02:21.165791 | orchestrator | + size = 20
2026-03-07 00:02:21.165798 | orchestrator | + volume_retype_policy = "never"
2026-03-07 00:02:21.165805 | orchestrator | + volume_type = "ssd"
2026-03-07 00:02:21.165812 | orchestrator | }
2026-03-07 00:02:21.165818 | orchestrator |
2026-03-07 00:02:21.165825 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[5] will be created
2026-03-07 00:02:21.165832 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-07 00:02:21.165839 | orchestrator | + attachment = (known after apply)
2026-03-07 00:02:21.165846 | orchestrator | + availability_zone = "nova"
2026-03-07 00:02:21.165853 | orchestrator | + id = (known after apply)
2026-03-07 00:02:21.165860 | orchestrator | + metadata = (known after apply)
2026-03-07 00:02:21.165867 | orchestrator | + name = "testbed-volume-5-node-5"
2026-03-07 00:02:21.165874 | orchestrator | + region = (known after apply)
2026-03-07 00:02:21.165881 | orchestrator | + size = 20
2026-03-07 00:02:21.165888 | orchestrator | + volume_retype_policy = "never"
2026-03-07 00:02:21.165895 | orchestrator | + volume_type = "ssd"
2026-03-07 00:02:21.165902 | orchestrator | }
2026-03-07 00:02:21.165909 | orchestrator |
2026-03-07 00:02:21.165916 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[6] will be created
2026-03-07 00:02:21.165923 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-07 00:02:21.165930 | orchestrator | + attachment = (known after apply)
2026-03-07 00:02:21.165937 | orchestrator | + availability_zone = "nova"
2026-03-07 00:02:21.165944 | orchestrator | + id = (known after apply)
2026-03-07 00:02:21.165951 | orchestrator | + metadata = (known after apply)
2026-03-07 00:02:21.165958 | orchestrator | + name = "testbed-volume-6-node-3"
2026-03-07 00:02:21.165965 | orchestrator | + region = (known after apply)
2026-03-07 00:02:21.165971 | orchestrator | + size = 20
2026-03-07 00:02:21.165978 | orchestrator | + volume_retype_policy = "never"
2026-03-07 00:02:21.165985 | orchestrator | + volume_type = "ssd"
2026-03-07 00:02:21.165992 | orchestrator | }
2026-03-07 00:02:21.166014 | orchestrator |
2026-03-07 00:02:21.166045 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[7] will be created
2026-03-07 00:02:21.166052 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-07 00:02:21.166064 | orchestrator | + attachment = (known after apply)
2026-03-07 00:02:21.166071 | orchestrator | + availability_zone = "nova"
2026-03-07 00:02:21.166078 | orchestrator | + id = (known after apply)
2026-03-07 00:02:21.166085 | orchestrator | + metadata = (known after apply)
2026-03-07 00:02:21.166092 | orchestrator | + name = "testbed-volume-7-node-4"
2026-03-07 00:02:21.166099 | orchestrator | + region = (known after apply)
2026-03-07 00:02:21.166106 | orchestrator | + size = 20
2026-03-07 00:02:21.166113 | orchestrator | + volume_retype_policy = "never"
2026-03-07 00:02:21.166120 | orchestrator | + volume_type = "ssd"
2026-03-07 00:02:21.166127 | orchestrator | }
2026-03-07 00:02:21.166136 | orchestrator |
2026-03-07 00:02:21.166143 | orchestrator | #
openstack_blockstorage_volume_v3.node_volume[8] will be created 2026-03-07 00:02:21.166150 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-03-07 00:02:21.166157 | orchestrator | + attachment = (known after apply) 2026-03-07 00:02:21.166164 | orchestrator | + availability_zone = "nova" 2026-03-07 00:02:21.166171 | orchestrator | + id = (known after apply) 2026-03-07 00:02:21.166178 | orchestrator | + metadata = (known after apply) 2026-03-07 00:02:21.166184 | orchestrator | + name = "testbed-volume-8-node-5" 2026-03-07 00:02:21.166191 | orchestrator | + region = (known after apply) 2026-03-07 00:02:21.166198 | orchestrator | + size = 20 2026-03-07 00:02:21.166205 | orchestrator | + volume_retype_policy = "never" 2026-03-07 00:02:21.166212 | orchestrator | + volume_type = "ssd" 2026-03-07 00:02:21.166219 | orchestrator | } 2026-03-07 00:02:21.166226 | orchestrator | 2026-03-07 00:02:21.166233 | orchestrator | # openstack_compute_instance_v2.manager_server will be created 2026-03-07 00:02:21.166240 | orchestrator | + resource "openstack_compute_instance_v2" "manager_server" { 2026-03-07 00:02:21.166248 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-07 00:02:21.166255 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-07 00:02:21.166262 | orchestrator | + all_metadata = (known after apply) 2026-03-07 00:02:21.166269 | orchestrator | + all_tags = (known after apply) 2026-03-07 00:02:21.166276 | orchestrator | + availability_zone = "nova" 2026-03-07 00:02:21.166283 | orchestrator | + config_drive = true 2026-03-07 00:02:21.166290 | orchestrator | + created = (known after apply) 2026-03-07 00:02:21.166298 | orchestrator | + flavor_id = (known after apply) 2026-03-07 00:02:21.166305 | orchestrator | + flavor_name = "OSISM-4V-16" 2026-03-07 00:02:21.166311 | orchestrator | + force_delete = false 2026-03-07 00:02:21.166318 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-07 00:02:21.166325 | 
orchestrator | + id = (known after apply) 2026-03-07 00:02:21.166332 | orchestrator | + image_id = (known after apply) 2026-03-07 00:02:21.166339 | orchestrator | + image_name = (known after apply) 2026-03-07 00:02:21.166346 | orchestrator | + key_pair = "testbed" 2026-03-07 00:02:21.166353 | orchestrator | + name = "testbed-manager" 2026-03-07 00:02:21.166360 | orchestrator | + power_state = "active" 2026-03-07 00:02:21.166367 | orchestrator | + region = (known after apply) 2026-03-07 00:02:21.166374 | orchestrator | + security_groups = (known after apply) 2026-03-07 00:02:21.166381 | orchestrator | + stop_before_destroy = false 2026-03-07 00:02:21.166388 | orchestrator | + updated = (known after apply) 2026-03-07 00:02:21.166395 | orchestrator | + user_data = (sensitive value) 2026-03-07 00:02:21.166402 | orchestrator | 2026-03-07 00:02:21.166409 | orchestrator | + block_device { 2026-03-07 00:02:21.166416 | orchestrator | + boot_index = 0 2026-03-07 00:02:21.166423 | orchestrator | + delete_on_termination = false 2026-03-07 00:02:21.166437 | orchestrator | + destination_type = "volume" 2026-03-07 00:02:21.166445 | orchestrator | + multiattach = false 2026-03-07 00:02:21.166452 | orchestrator | + source_type = "volume" 2026-03-07 00:02:21.166459 | orchestrator | + uuid = (known after apply) 2026-03-07 00:02:21.166471 | orchestrator | } 2026-03-07 00:02:21.166478 | orchestrator | 2026-03-07 00:02:21.166485 | orchestrator | + network { 2026-03-07 00:02:21.166492 | orchestrator | + access_network = false 2026-03-07 00:02:21.166499 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-07 00:02:21.166506 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-07 00:02:21.166513 | orchestrator | + mac = (known after apply) 2026-03-07 00:02:21.166519 | orchestrator | + name = (known after apply) 2026-03-07 00:02:21.166526 | orchestrator | + port = (known after apply) 2026-03-07 00:02:21.166533 | orchestrator | + uuid = (known after apply) 2026-03-07 
00:02:21.166541 | orchestrator | } 2026-03-07 00:02:21.166548 | orchestrator | } 2026-03-07 00:02:21.166557 | orchestrator | 2026-03-07 00:02:21.166564 | orchestrator | # openstack_compute_instance_v2.node_server[0] will be created 2026-03-07 00:02:21.166571 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-07 00:02:21.166578 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-07 00:02:21.166585 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-07 00:02:21.166592 | orchestrator | + all_metadata = (known after apply) 2026-03-07 00:02:21.166599 | orchestrator | + all_tags = (known after apply) 2026-03-07 00:02:21.166606 | orchestrator | + availability_zone = "nova" 2026-03-07 00:02:21.166613 | orchestrator | + config_drive = true 2026-03-07 00:02:21.166620 | orchestrator | + created = (known after apply) 2026-03-07 00:02:21.166627 | orchestrator | + flavor_id = (known after apply) 2026-03-07 00:02:21.166634 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-07 00:02:21.166641 | orchestrator | + force_delete = false 2026-03-07 00:02:21.166648 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-07 00:02:21.166655 | orchestrator | + id = (known after apply) 2026-03-07 00:02:21.166662 | orchestrator | + image_id = (known after apply) 2026-03-07 00:02:21.166669 | orchestrator | + image_name = (known after apply) 2026-03-07 00:02:21.166676 | orchestrator | + key_pair = "testbed" 2026-03-07 00:02:21.166683 | orchestrator | + name = "testbed-node-0" 2026-03-07 00:02:21.166690 | orchestrator | + power_state = "active" 2026-03-07 00:02:21.166697 | orchestrator | + region = (known after apply) 2026-03-07 00:02:21.166705 | orchestrator | + security_groups = (known after apply) 2026-03-07 00:02:21.166712 | orchestrator | + stop_before_destroy = false 2026-03-07 00:02:21.166719 | orchestrator | + updated = (known after apply) 2026-03-07 00:02:21.166725 | orchestrator | + user_data = 
"ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-07 00:02:21.166733 | orchestrator | 2026-03-07 00:02:21.166740 | orchestrator | + block_device { 2026-03-07 00:02:21.166747 | orchestrator | + boot_index = 0 2026-03-07 00:02:21.166754 | orchestrator | + delete_on_termination = false 2026-03-07 00:02:21.166761 | orchestrator | + destination_type = "volume" 2026-03-07 00:02:21.166768 | orchestrator | + multiattach = false 2026-03-07 00:02:21.166774 | orchestrator | + source_type = "volume" 2026-03-07 00:02:21.166782 | orchestrator | + uuid = (known after apply) 2026-03-07 00:02:21.166789 | orchestrator | } 2026-03-07 00:02:21.166796 | orchestrator | 2026-03-07 00:02:21.166803 | orchestrator | + network { 2026-03-07 00:02:21.166810 | orchestrator | + access_network = false 2026-03-07 00:02:21.166817 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-07 00:02:21.166824 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-07 00:02:21.166831 | orchestrator | + mac = (known after apply) 2026-03-07 00:02:21.166838 | orchestrator | + name = (known after apply) 2026-03-07 00:02:21.166845 | orchestrator | + port = (known after apply) 2026-03-07 00:02:21.166852 | orchestrator | + uuid = (known after apply) 2026-03-07 00:02:21.166859 | orchestrator | } 2026-03-07 00:02:21.166866 | orchestrator | } 2026-03-07 00:02:21.166875 | orchestrator | 2026-03-07 00:02:21.166882 | orchestrator | # openstack_compute_instance_v2.node_server[1] will be created 2026-03-07 00:02:21.166889 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-07 00:02:21.166896 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-07 00:02:21.166908 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-07 00:02:21.166915 | orchestrator | + all_metadata = (known after apply) 2026-03-07 00:02:21.166922 | orchestrator | + all_tags = (known after apply) 2026-03-07 00:02:21.166929 | orchestrator | + availability_zone = "nova" 2026-03-07 00:02:21.166936 
| orchestrator | + config_drive = true 2026-03-07 00:02:21.166944 | orchestrator | + created = (known after apply) 2026-03-07 00:02:21.166951 | orchestrator | + flavor_id = (known after apply) 2026-03-07 00:02:21.166958 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-07 00:02:21.166964 | orchestrator | + force_delete = false 2026-03-07 00:02:21.166971 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-07 00:02:21.166978 | orchestrator | + id = (known after apply) 2026-03-07 00:02:21.166985 | orchestrator | + image_id = (known after apply) 2026-03-07 00:02:21.166992 | orchestrator | + image_name = (known after apply) 2026-03-07 00:02:21.167043 | orchestrator | + key_pair = "testbed" 2026-03-07 00:02:21.167052 | orchestrator | + name = "testbed-node-1" 2026-03-07 00:02:21.167059 | orchestrator | + power_state = "active" 2026-03-07 00:02:21.167066 | orchestrator | + region = (known after apply) 2026-03-07 00:02:21.167073 | orchestrator | + security_groups = (known after apply) 2026-03-07 00:02:21.167080 | orchestrator | + stop_before_destroy = false 2026-03-07 00:02:21.167087 | orchestrator | + updated = (known after apply) 2026-03-07 00:02:21.167093 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-07 00:02:21.167101 | orchestrator | 2026-03-07 00:02:21.167108 | orchestrator | + block_device { 2026-03-07 00:02:21.167115 | orchestrator | + boot_index = 0 2026-03-07 00:02:21.167122 | orchestrator | + delete_on_termination = false 2026-03-07 00:02:21.167129 | orchestrator | + destination_type = "volume" 2026-03-07 00:02:21.167136 | orchestrator | + multiattach = false 2026-03-07 00:02:21.167142 | orchestrator | + source_type = "volume" 2026-03-07 00:02:21.167149 | orchestrator | + uuid = (known after apply) 2026-03-07 00:02:21.167156 | orchestrator | } 2026-03-07 00:02:21.167163 | orchestrator | 2026-03-07 00:02:21.167170 | orchestrator | + network { 2026-03-07 00:02:21.167177 | orchestrator | + access_network = 
false 2026-03-07 00:02:21.167184 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-07 00:02:21.167190 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-07 00:02:21.167197 | orchestrator | + mac = (known after apply) 2026-03-07 00:02:21.167204 | orchestrator | + name = (known after apply) 2026-03-07 00:02:21.167210 | orchestrator | + port = (known after apply) 2026-03-07 00:02:21.167217 | orchestrator | + uuid = (known after apply) 2026-03-07 00:02:21.167224 | orchestrator | } 2026-03-07 00:02:21.167231 | orchestrator | } 2026-03-07 00:02:21.167241 | orchestrator | 2026-03-07 00:02:21.167248 | orchestrator | # openstack_compute_instance_v2.node_server[2] will be created 2026-03-07 00:02:21.167255 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-07 00:02:21.167262 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-07 00:02:21.167269 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-07 00:02:21.167277 | orchestrator | + all_metadata = (known after apply) 2026-03-07 00:02:21.167284 | orchestrator | + all_tags = (known after apply) 2026-03-07 00:02:21.167295 | orchestrator | + availability_zone = "nova" 2026-03-07 00:02:21.167302 | orchestrator | + config_drive = true 2026-03-07 00:02:21.167309 | orchestrator | + created = (known after apply) 2026-03-07 00:02:21.167316 | orchestrator | + flavor_id = (known after apply) 2026-03-07 00:02:21.167323 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-07 00:02:21.167329 | orchestrator | + force_delete = false 2026-03-07 00:02:21.167336 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-07 00:02:21.167343 | orchestrator | + id = (known after apply) 2026-03-07 00:02:21.167350 | orchestrator | + image_id = (known after apply) 2026-03-07 00:02:21.167363 | orchestrator | + image_name = (known after apply) 2026-03-07 00:02:21.167370 | orchestrator | + key_pair = "testbed" 2026-03-07 00:02:21.167377 | orchestrator | + name = 
"testbed-node-2" 2026-03-07 00:02:21.167384 | orchestrator | + power_state = "active" 2026-03-07 00:02:21.167390 | orchestrator | + region = (known after apply) 2026-03-07 00:02:21.167398 | orchestrator | + security_groups = (known after apply) 2026-03-07 00:02:21.167405 | orchestrator | + stop_before_destroy = false 2026-03-07 00:02:21.167412 | orchestrator | + updated = (known after apply) 2026-03-07 00:02:21.167419 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-07 00:02:21.167426 | orchestrator | 2026-03-07 00:02:21.167433 | orchestrator | + block_device { 2026-03-07 00:02:21.167440 | orchestrator | + boot_index = 0 2026-03-07 00:02:21.167446 | orchestrator | + delete_on_termination = false 2026-03-07 00:02:21.167453 | orchestrator | + destination_type = "volume" 2026-03-07 00:02:21.167460 | orchestrator | + multiattach = false 2026-03-07 00:02:21.167467 | orchestrator | + source_type = "volume" 2026-03-07 00:02:21.167474 | orchestrator | + uuid = (known after apply) 2026-03-07 00:02:21.167481 | orchestrator | } 2026-03-07 00:02:21.167487 | orchestrator | 2026-03-07 00:02:21.167494 | orchestrator | + network { 2026-03-07 00:02:21.167501 | orchestrator | + access_network = false 2026-03-07 00:02:21.167508 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-07 00:02:21.167515 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-07 00:02:21.167522 | orchestrator | + mac = (known after apply) 2026-03-07 00:02:21.167529 | orchestrator | + name = (known after apply) 2026-03-07 00:02:21.167536 | orchestrator | + port = (known after apply) 2026-03-07 00:02:21.167543 | orchestrator | + uuid = (known after apply) 2026-03-07 00:02:21.167550 | orchestrator | } 2026-03-07 00:02:21.167557 | orchestrator | } 2026-03-07 00:02:21.167566 | orchestrator | 2026-03-07 00:02:21.167574 | orchestrator | # openstack_compute_instance_v2.node_server[3] will be created 2026-03-07 00:02:21.167581 | orchestrator | + resource 
"openstack_compute_instance_v2" "node_server" { 2026-03-07 00:02:21.167588 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-07 00:02:21.167595 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-07 00:02:21.167602 | orchestrator | + all_metadata = (known after apply) 2026-03-07 00:02:21.167609 | orchestrator | + all_tags = (known after apply) 2026-03-07 00:02:21.167616 | orchestrator | + availability_zone = "nova" 2026-03-07 00:02:21.167623 | orchestrator | + config_drive = true 2026-03-07 00:02:21.167630 | orchestrator | + created = (known after apply) 2026-03-07 00:02:21.167637 | orchestrator | + flavor_id = (known after apply) 2026-03-07 00:02:21.167644 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-07 00:02:21.167651 | orchestrator | + force_delete = false 2026-03-07 00:02:21.167658 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-07 00:02:21.167665 | orchestrator | + id = (known after apply) 2026-03-07 00:02:21.167672 | orchestrator | + image_id = (known after apply) 2026-03-07 00:02:21.167679 | orchestrator | + image_name = (known after apply) 2026-03-07 00:02:21.167686 | orchestrator | + key_pair = "testbed" 2026-03-07 00:02:21.167693 | orchestrator | + name = "testbed-node-3" 2026-03-07 00:02:21.167700 | orchestrator | + power_state = "active" 2026-03-07 00:02:21.167707 | orchestrator | + region = (known after apply) 2026-03-07 00:02:21.167714 | orchestrator | + security_groups = (known after apply) 2026-03-07 00:02:21.167721 | orchestrator | + stop_before_destroy = false 2026-03-07 00:02:21.167728 | orchestrator | + updated = (known after apply) 2026-03-07 00:02:21.167735 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-07 00:02:21.167742 | orchestrator | 2026-03-07 00:02:21.167749 | orchestrator | + block_device { 2026-03-07 00:02:21.167760 | orchestrator | + boot_index = 0 2026-03-07 00:02:21.167767 | orchestrator | + delete_on_termination = false 2026-03-07 
00:02:21.167774 | orchestrator | + destination_type = "volume" 2026-03-07 00:02:21.167785 | orchestrator | + multiattach = false 2026-03-07 00:02:21.167792 | orchestrator | + source_type = "volume" 2026-03-07 00:02:21.167799 | orchestrator | + uuid = (known after apply) 2026-03-07 00:02:21.167806 | orchestrator | } 2026-03-07 00:02:21.167812 | orchestrator | 2026-03-07 00:02:21.167819 | orchestrator | + network { 2026-03-07 00:02:21.167826 | orchestrator | + access_network = false 2026-03-07 00:02:21.167833 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-07 00:02:21.167841 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-07 00:02:21.167848 | orchestrator | + mac = (known after apply) 2026-03-07 00:02:21.167855 | orchestrator | + name = (known after apply) 2026-03-07 00:02:21.167861 | orchestrator | + port = (known after apply) 2026-03-07 00:02:21.167869 | orchestrator | + uuid = (known after apply) 2026-03-07 00:02:21.167876 | orchestrator | } 2026-03-07 00:02:21.167883 | orchestrator | } 2026-03-07 00:02:21.167892 | orchestrator | 2026-03-07 00:02:21.167899 | orchestrator | # openstack_compute_instance_v2.node_server[4] will be created 2026-03-07 00:02:21.167906 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-07 00:02:21.167913 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-07 00:02:21.167920 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-07 00:02:21.167927 | orchestrator | + all_metadata = (known after apply) 2026-03-07 00:02:21.167934 | orchestrator | + all_tags = (known after apply) 2026-03-07 00:02:21.167941 | orchestrator | + availability_zone = "nova" 2026-03-07 00:02:21.167948 | orchestrator | + config_drive = true 2026-03-07 00:02:21.167955 | orchestrator | + created = (known after apply) 2026-03-07 00:02:21.167962 | orchestrator | + flavor_id = (known after apply) 2026-03-07 00:02:21.167969 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-07 00:02:21.167976 | 
orchestrator | + force_delete = false 2026-03-07 00:02:21.167983 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-07 00:02:21.167990 | orchestrator | + id = (known after apply) 2026-03-07 00:02:21.168009 | orchestrator | + image_id = (known after apply) 2026-03-07 00:02:21.168016 | orchestrator | + image_name = (known after apply) 2026-03-07 00:02:21.168022 | orchestrator | + key_pair = "testbed" 2026-03-07 00:02:21.168028 | orchestrator | + name = "testbed-node-4" 2026-03-07 00:02:21.168035 | orchestrator | + power_state = "active" 2026-03-07 00:02:21.168042 | orchestrator | + region = (known after apply) 2026-03-07 00:02:21.168049 | orchestrator | + security_groups = (known after apply) 2026-03-07 00:02:21.168056 | orchestrator | + stop_before_destroy = false 2026-03-07 00:02:21.168063 | orchestrator | + updated = (known after apply) 2026-03-07 00:02:21.168070 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-07 00:02:21.168077 | orchestrator | 2026-03-07 00:02:21.168084 | orchestrator | + block_device { 2026-03-07 00:02:21.168091 | orchestrator | + boot_index = 0 2026-03-07 00:02:21.168098 | orchestrator | + delete_on_termination = false 2026-03-07 00:02:21.168105 | orchestrator | + destination_type = "volume" 2026-03-07 00:02:21.168112 | orchestrator | + multiattach = false 2026-03-07 00:02:21.168119 | orchestrator | + source_type = "volume" 2026-03-07 00:02:21.168126 | orchestrator | + uuid = (known after apply) 2026-03-07 00:02:21.168133 | orchestrator | } 2026-03-07 00:02:21.168140 | orchestrator | 2026-03-07 00:02:21.168147 | orchestrator | + network { 2026-03-07 00:02:21.168153 | orchestrator | + access_network = false 2026-03-07 00:02:21.168160 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-07 00:02:21.168167 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-07 00:02:21.168174 | orchestrator | + mac = (known after apply) 2026-03-07 00:02:21.168181 | orchestrator | + name = (known 
after apply) 2026-03-07 00:02:21.168188 | orchestrator | + port = (known after apply) 2026-03-07 00:02:21.168195 | orchestrator | + uuid = (known after apply) 2026-03-07 00:02:21.168202 | orchestrator | } 2026-03-07 00:02:21.168209 | orchestrator | } 2026-03-07 00:02:21.168223 | orchestrator | 2026-03-07 00:02:21.168231 | orchestrator | # openstack_compute_instance_v2.node_server[5] will be created 2026-03-07 00:02:21.168238 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-07 00:02:21.168245 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-07 00:02:21.168252 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-07 00:02:21.168259 | orchestrator | + all_metadata = (known after apply) 2026-03-07 00:02:21.168265 | orchestrator | + all_tags = (known after apply) 2026-03-07 00:02:21.168272 | orchestrator | + availability_zone = "nova" 2026-03-07 00:02:21.168279 | orchestrator | + config_drive = true 2026-03-07 00:02:21.168286 | orchestrator | + created = (known after apply) 2026-03-07 00:02:21.168293 | orchestrator | + flavor_id = (known after apply) 2026-03-07 00:02:21.168300 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-07 00:02:21.168307 | orchestrator | + force_delete = false 2026-03-07 00:02:21.168317 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-07 00:02:21.168324 | orchestrator | + id = (known after apply) 2026-03-07 00:02:21.168331 | orchestrator | + image_id = (known after apply) 2026-03-07 00:02:21.168338 | orchestrator | + image_name = (known after apply) 2026-03-07 00:02:21.168345 | orchestrator | + key_pair = "testbed" 2026-03-07 00:02:21.168352 | orchestrator | + name = "testbed-node-5" 2026-03-07 00:02:21.168359 | orchestrator | + power_state = "active" 2026-03-07 00:02:21.168366 | orchestrator | + region = (known after apply) 2026-03-07 00:02:21.168373 | orchestrator | + security_groups = (known after apply) 2026-03-07 00:02:21.168381 | orchestrator | + 
stop_before_destroy = false 2026-03-07 00:02:21.168388 | orchestrator | + updated = (known after apply) 2026-03-07 00:02:21.168395 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-07 00:02:21.168402 | orchestrator | 2026-03-07 00:02:21.168409 | orchestrator | + block_device { 2026-03-07 00:02:21.168416 | orchestrator | + boot_index = 0 2026-03-07 00:02:21.168423 | orchestrator | + delete_on_termination = false 2026-03-07 00:02:21.168430 | orchestrator | + destination_type = "volume" 2026-03-07 00:02:21.168437 | orchestrator | + multiattach = false 2026-03-07 00:02:21.168444 | orchestrator | + source_type = "volume" 2026-03-07 00:02:21.168451 | orchestrator | + uuid = (known after apply) 2026-03-07 00:02:21.168457 | orchestrator | } 2026-03-07 00:02:21.168465 | orchestrator | 2026-03-07 00:02:21.168472 | orchestrator | + network { 2026-03-07 00:02:21.168479 | orchestrator | + access_network = false 2026-03-07 00:02:21.168485 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-07 00:02:21.168492 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-07 00:02:21.168500 | orchestrator | + mac = (known after apply) 2026-03-07 00:02:21.168506 | orchestrator | + name = (known after apply) 2026-03-07 00:02:21.168513 | orchestrator | + port = (known after apply) 2026-03-07 00:02:21.168520 | orchestrator | + uuid = (known after apply) 2026-03-07 00:02:21.168527 | orchestrator | } 2026-03-07 00:02:21.168535 | orchestrator | } 2026-03-07 00:02:21.168542 | orchestrator | 2026-03-07 00:02:21.168549 | orchestrator | # openstack_compute_keypair_v2.key will be created 2026-03-07 00:02:21.168556 | orchestrator | + resource "openstack_compute_keypair_v2" "key" { 2026-03-07 00:02:21.168563 | orchestrator | + fingerprint = (known after apply) 2026-03-07 00:02:21.168570 | orchestrator | + id = (known after apply) 2026-03-07 00:02:21.168577 | orchestrator | + name = "testbed" 2026-03-07 00:02:21.168584 | orchestrator | + private_key = 
(sensitive value) 2026-03-07 00:02:21.168591 | orchestrator | + public_key = (known after apply) 2026-03-07 00:02:21.168598 | orchestrator | + region = (known after apply) 2026-03-07 00:02:21.168605 | orchestrator | + user_id = (known after apply) 2026-03-07 00:02:21.168613 | orchestrator | } 2026-03-07 00:02:21.168620 | orchestrator | 2026-03-07 00:02:21.168627 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2026-03-07 00:02:21.168634 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-03-07 00:02:21.168645 | orchestrator | + device = (known after apply) 2026-03-07 00:02:21.168652 | orchestrator | + id = (known after apply) 2026-03-07 00:02:21.168659 | orchestrator | + instance_id = (known after apply) 2026-03-07 00:02:21.168666 | orchestrator | + region = (known after apply) 2026-03-07 00:02:21.168673 | orchestrator | + volume_id = (known after apply) 2026-03-07 00:02:21.168680 | orchestrator | } 2026-03-07 00:02:21.168689 | orchestrator | 2026-03-07 00:02:21.168696 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2026-03-07 00:02:21.168703 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-03-07 00:02:21.168710 | orchestrator | + device = (known after apply) 2026-03-07 00:02:21.168717 | orchestrator | + id = (known after apply) 2026-03-07 00:02:21.168724 | orchestrator | + instance_id = (known after apply) 2026-03-07 00:02:21.168731 | orchestrator | + region = (known after apply) 2026-03-07 00:02:21.168738 | orchestrator | + volume_id = (known after apply) 2026-03-07 00:02:21.168745 | orchestrator | } 2026-03-07 00:02:21.168752 | orchestrator | 2026-03-07 00:02:21.168759 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2026-03-07 00:02:21.168765 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" 
2026-03-07 00:02:21 | orchestrator | (terraform plan output continues; the repeated per-line "timestamp | orchestrator |" prefix is collapsed below for readability)

                                                                           {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[2] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.12"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[3] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.13"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[4] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.14"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[5] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.15"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_router_interface_v2.router_interface will be created
  + resource "openstack_networking_router_interface_v2" "router_interface" {
      + force_destroy = false
      + id            = (known after apply)
      + port_id       = (known after apply)
      + region        = (known after apply)
      + router_id     = (known after apply)
      + subnet_id     = (known after apply)
    }

  # openstack_networking_router_v2.router will be created
  + resource "openstack_networking_router_v2" "router" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + distributed             = (known after apply)
      + enable_snat             = (known after apply)
      + external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
      + external_qos_policy_id  = (known after apply)
      + id                      = (known after apply)
      + name                    = "testbed"
      + region                  = (known after apply)
      + tenant_id               = (known after apply)

      + external_fixed_ip (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
      + description             = "ssh"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 22
      + port_range_min          = 22
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
      + description             = "wireguard"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 51820
      + port_range_min          = 51820
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
      + description             = "vrrp"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "112"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_management will be created
  + resource "openstack_networking_secgroup_v2" "security_group_management" {
      + all_tags    = (known after apply)
      + description = "management security group"
      + id          = (known after apply)
      + name        = "testbed-management"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_node will be created
  + resource "openstack_networking_secgroup_v2" "security_group_node" {
      + all_tags    = (known after apply)
      + description = "node security group"
      + id          = (known after apply)
      + name        = "testbed-node"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_subnet_v2.subnet_management will be created
  + resource "openstack_networking_subnet_v2" "subnet_management" {
      + all_tags          = (known after apply)
      + cidr              = "192.168.16.0/20"
      + dns_nameservers   = [
          + "8.8.8.8",
          + "9.9.9.9",
        ]
      + enable_dhcp       = true
      + gateway_ip        = (known after apply)
      + id                = (known after apply)
      + ip_version        = 4
      + ipv6_address_mode = (known after apply)
      + ipv6_ra_mode      = (known after apply)
      + name              = "subnet-testbed-management"
2026-03-07 00:02:21.172787 | orchestrator | + network_id = (known after apply) 2026-03-07 00:02:21.172794 | orchestrator | + no_gateway = false 2026-03-07 00:02:21.172801 | orchestrator | + region = (known after apply) 2026-03-07 00:02:21.172808 | orchestrator | + service_types = (known after apply) 2026-03-07 00:02:21.172820 | orchestrator | + tenant_id = (known after apply) 2026-03-07 00:02:21.172827 | orchestrator | 2026-03-07 00:02:21.172834 | orchestrator | + allocation_pool { 2026-03-07 00:02:21.172841 | orchestrator | + end = "192.168.31.250" 2026-03-07 00:02:21.172848 | orchestrator | + start = "192.168.31.200" 2026-03-07 00:02:21.172855 | orchestrator | } 2026-03-07 00:02:21.172862 | orchestrator | } 2026-03-07 00:02:21.172869 | orchestrator | 2026-03-07 00:02:21.172876 | orchestrator | # terraform_data.image will be created 2026-03-07 00:02:21.172883 | orchestrator | + resource "terraform_data" "image" { 2026-03-07 00:02:21.172889 | orchestrator | + id = (known after apply) 2026-03-07 00:02:21.172896 | orchestrator | + input = "Ubuntu 24.04" 2026-03-07 00:02:21.172903 | orchestrator | + output = (known after apply) 2026-03-07 00:02:21.172909 | orchestrator | } 2026-03-07 00:02:21.172916 | orchestrator | 2026-03-07 00:02:21.172923 | orchestrator | # terraform_data.image_node will be created 2026-03-07 00:02:21.172930 | orchestrator | + resource "terraform_data" "image_node" { 2026-03-07 00:02:21.172937 | orchestrator | + id = (known after apply) 2026-03-07 00:02:21.172944 | orchestrator | + input = "Ubuntu 24.04" 2026-03-07 00:02:21.172951 | orchestrator | + output = (known after apply) 2026-03-07 00:02:21.172958 | orchestrator | } 2026-03-07 00:02:21.172965 | orchestrator | 2026-03-07 00:02:21.172971 | orchestrator | Plan: 64 to add, 0 to change, 0 to destroy. 
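The plan output above could be produced by HCL along these lines. This is a minimal reconstruction for orientation, not the project's actual source: only the attribute values printed in the plan are taken from the log, and the referenced `security_group_management` / `net_management` resources are assumed to be defined elsewhere in the configuration.

```hcl
# Hypothetical sketch reconstructed from the plan output above.

# VRRP is IP protocol 112; it has to be allowed explicitly so that
# keepalived instances inside the testbed can exchange advertisements.
resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
  description       = "vrrp"
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "112"
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = openstack_networking_secgroup_v2.security_group_management.id
}

# Management subnet: a /20 with DHCP enabled and an allocation pool
# confined to the top of the range (192.168.31.200-250).
resource "openstack_networking_subnet_v2" "subnet_management" {
  name            = "subnet-testbed-management"
  network_id      = openstack_networking_network_v2.net_management.id
  cidr            = "192.168.16.0/20"
  ip_version      = 4
  enable_dhcp     = true
  dns_nameservers = ["8.8.8.8", "9.9.9.9"]

  allocation_pool {
    start = "192.168.31.200"
    end   = "192.168.31.250"
  }
}
```

Keeping the DHCP pool small while the CIDR is large leaves most of the /20 free for statically addressed node ports, which matches the fixed port resources created later in the log.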
2026-03-07 00:02:21.172978 | orchestrator |
2026-03-07 00:02:21.172985 | orchestrator | Changes to Outputs:
2026-03-07 00:02:21.172992 | orchestrator | + manager_address = (sensitive value)
2026-03-07 00:02:21.173038 | orchestrator | + private_key = (sensitive value)
2026-03-07 00:02:21.369839 | orchestrator | terraform_data.image: Creating...
2026-03-07 00:02:21.369893 | orchestrator | terraform_data.image_node: Creating...
2026-03-07 00:02:21.369900 | orchestrator | terraform_data.image: Creation complete after 0s [id=f7a8c5ca-9ddf-fe24-da5a-a3e88b1a9556]
2026-03-07 00:02:21.369906 | orchestrator | terraform_data.image_node: Creation complete after 0s [id=34a5b874-0101-13c6-4ee7-26a0dcfbebec]
2026-03-07 00:02:21.383219 | orchestrator | data.openstack_images_image_v2.image_node: Reading...
2026-03-07 00:02:21.383274 | orchestrator | data.openstack_images_image_v2.image: Reading...
2026-03-07 00:02:21.386096 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creating...
2026-03-07 00:02:21.386398 | orchestrator | openstack_compute_keypair_v2.key: Creating...
2026-03-07 00:02:21.398157 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creating...
2026-03-07 00:02:21.398481 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creating...
2026-03-07 00:02:21.399550 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creating...
2026-03-07 00:02:21.399578 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creating...
2026-03-07 00:02:21.400921 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creating...
2026-03-07 00:02:21.403817 | orchestrator | openstack_networking_network_v2.net_management: Creating...
2026-03-07 00:02:21.841297 | orchestrator | data.openstack_images_image_v2.image_node: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-03-07 00:02:21.849490 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creating...
2026-03-07 00:02:21.857745 | orchestrator | data.openstack_images_image_v2.image: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-03-07 00:02:21.864664 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creating...
2026-03-07 00:02:21.936831 | orchestrator | openstack_compute_keypair_v2.key: Creation complete after 1s [id=testbed]
2026-03-07 00:02:21.944662 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creating...
2026-03-07 00:02:22.409197 | orchestrator | openstack_networking_network_v2.net_management: Creation complete after 1s [id=fd6b7b48-c50f-452d-897b-48e9292ff43b]
2026-03-07 00:02:22.419958 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2026-03-07 00:02:25.027069 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 4s [id=72259f68-e866-4719-b0ea-eb473e4fd6bd]
2026-03-07 00:02:25.033964 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2026-03-07 00:02:25.059209 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 4s [id=cc667673-5185-49c1-bb99-04f4fd4068da]
2026-03-07 00:02:25.068837 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2026-03-07 00:02:25.083167 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 4s [id=c20bba62-61d0-4a1a-9760-7959bbad95dc]
2026-03-07 00:02:25.084362 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 4s [id=34b2d3d1-49da-433c-9475-894febcc7103]
2026-03-07 00:02:25.094964 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2026-03-07 00:02:25.096648 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2026-03-07 00:02:25.107341 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 4s [id=aeae70bf-06ae-4bd4-b471-9be2a413b359]
2026-03-07 00:02:25.117104 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2026-03-07 00:02:25.125620 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 4s [id=56f8efd0-3f15-4df4-bf76-395b3326da9d]
2026-03-07 00:02:25.130606 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2026-03-07 00:02:25.134686 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 3s [id=6b3da8fe-8a9b-450a-9caf-2db14f74686e]
2026-03-07 00:02:25.148046 | orchestrator | local_file.id_rsa_pub: Creating...
2026-03-07 00:02:25.152390 | orchestrator | local_file.id_rsa_pub: Creation complete after 0s [id=b98a44394fe948c789d5e6fc0ba7423c18473aba]
2026-03-07 00:02:25.163153 | orchestrator | local_sensitive_file.id_rsa: Creating...
2026-03-07 00:02:25.167193 | orchestrator | local_sensitive_file.id_rsa: Creation complete after 0s [id=4f89a5aa879dea1d6910df4373dedbe8dbaf9d2c]
2026-03-07 00:02:25.175857 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 3s [id=9c38bee3-edc8-40af-8be7-576eb57a340e]
2026-03-07 00:02:25.177117 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creating...
2026-03-07 00:02:25.211552 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 3s [id=c95cdd10-84fe-4990-af41-f1a34ec8ee15]
2026-03-07 00:02:25.835282 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 4s [id=1e87ab8c-74da-42e5-bd8a-bfd4a87775ea]
2026-03-07 00:02:26.660728 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creation complete after 2s [id=08d52a09-1235-4c55-b572-aaa90600aa8c]
2026-03-07 00:02:26.668321 | orchestrator | openstack_networking_router_v2.router: Creating...
2026-03-07 00:02:28.464556 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 3s [id=63d08656-5fe3-4965-96b2-d9d7b897e8d9]
2026-03-07 00:02:28.524952 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 4s [id=86fe1ccb-6d77-4e2c-ab6c-94ce433f2c86]
2026-03-07 00:02:28.536347 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 4s [id=448434c8-f92f-4dad-84e2-85ad64f4e35e]
2026-03-07 00:02:28.583381 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 4s [id=6d86149b-0ba4-4a58-9fc2-b00d0a760740]
2026-03-07 00:02:28.599574 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 4s [id=420a1f40-7e0f-4106-8b14-3c7e5e75cad6]
2026-03-07 00:02:28.611949 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 4s [id=f9ea3293-b0f4-4fe9-a3f5-883417d23039]
2026-03-07 00:02:31.244042 | orchestrator | openstack_networking_router_v2.router: Creation complete after 4s [id=1fce22a9-4004-451a-8463-de821d4da2c5]
2026-03-07 00:02:31.251802 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creating...
2026-03-07 00:02:31.251890 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creating...
2026-03-07 00:02:31.251907 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creating...
2026-03-07 00:02:31.451872 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=7e26a3e4-0062-4132-a797-b90564c69205]
2026-03-07 00:02:31.464167 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2026-03-07 00:02:31.465187 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2026-03-07 00:02:31.467925 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2026-03-07 00:02:31.468040 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2026-03-07 00:02:31.468058 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2026-03-07 00:02:31.470839 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=5b832918-102a-434e-b888-6f103880d14a]
2026-03-07 00:02:31.478566 | orchestrator | openstack_networking_port_v2.manager_port_management: Creating...
2026-03-07 00:02:31.478665 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2026-03-07 00:02:31.483453 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2026-03-07 00:02:31.483745 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2026-03-07 00:02:31.655940 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=24e7336f-0f1a-4b84-a8e7-053e7af2f9f6]
2026-03-07 00:02:31.661369 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2026-03-07 00:02:31.928809 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 1s [id=ed564631-e507-4bf0-9321-761b445ddbcf]
2026-03-07 00:02:31.941643 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creating...
2026-03-07 00:02:32.236058 | orchestrator | openstack_networking_port_v2.manager_port_management: Creation complete after 1s [id=a8cf99c1-abd9-4a14-866a-9a8ac36018a5]
2026-03-07 00:02:32.247318 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 1s [id=3dfb44e0-5b5a-4e9a-a8e5-0492604ab05c]
2026-03-07 00:02:32.756079 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creating...
2026-03-07 00:02:32.756127 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creating...
2026-03-07 00:02:32.756138 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 1s [id=d92d1e0d-3871-44e7-a50c-bbf4bf5a776c]
2026-03-07 00:02:32.756147 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creating...
2026-03-07 00:02:32.756156 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 2s [id=e1eb773c-ef2e-4ffe-9b16-e2c03da3f13c]
2026-03-07 00:02:32.756163 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creating...
2026-03-07 00:02:32.817543 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 2s [id=9147db9a-d56f-4971-9558-328424818f23]
2026-03-07 00:02:32.829276 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creating...
2026-03-07 00:02:33.043093 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=2fd126ca-c25a-4827-9ff5-30956d93c7b4]
2026-03-07 00:02:33.147117 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 2s [id=a0c5bf30-c858-4c89-b347-b51bbe4881c4]
2026-03-07 00:02:33.149523 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 1s [id=c72cf56d-4cfc-4f14-8284-b6d1d41f8c1b]
2026-03-07 00:02:33.166535 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creation complete after 1s [id=b51d9c5e-5f92-43e5-95b4-1044519269b0]
2026-03-07 00:02:33.340731 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 2s [id=db1f8c99-fba4-499d-9e23-03db6dbe9b49]
2026-03-07 00:02:33.355851 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creation complete after 1s [id=99502a10-6bb5-43bb-bb16-76c4d6aec6c4]
2026-03-07 00:02:33.406794 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creation complete after 1s [id=93d7a712-c614-4070-994a-c40697d01b09]
2026-03-07 00:02:33.537934 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creation complete after 1s [id=d3bcabd4-1e96-4b78-af1a-08313044e02d]
2026-03-07 00:02:34.840132 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creation complete after 4s [id=ab04ed86-ea4a-4f5a-a524-861cbfbb658a]
2026-03-07 00:02:34.845739 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2026-03-07 00:02:35.459558 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creation complete after 2s [id=364ea72a-96a2-4769-9390-5b96efca9e6f]
2026-03-07 00:02:35.494270 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creating...
2026-03-07 00:02:35.497975 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creating...
2026-03-07 00:02:35.500703 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creating...
2026-03-07 00:02:35.500961 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creating...
2026-03-07 00:02:35.505823 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creating...
2026-03-07 00:02:35.509958 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creating...
2026-03-07 00:02:36.696217 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 2s [id=f410d265-c3c7-4342-b5d6-48560176942a]
2026-03-07 00:02:36.707615 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2026-03-07 00:02:36.712939 | orchestrator | local_file.MANAGER_ADDRESS: Creating...
2026-03-07 00:02:36.714061 | orchestrator | local_file.inventory: Creating...
2026-03-07 00:02:36.718178 | orchestrator | local_file.inventory: Creation complete after 0s [id=25c90af2d5a989614cd90011119fe4f529bd214c]
2026-03-07 00:02:36.720690 | orchestrator | local_file.MANAGER_ADDRESS: Creation complete after 0s [id=5a82a34c50baf969da582e20be9851ecae92d987]
2026-03-07 00:02:38.522754 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 2s [id=f410d265-c3c7-4342-b5d6-48560176942a]
2026-03-07 00:02:45.497060 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2026-03-07 00:02:45.500157 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [11s elapsed]
2026-03-07 00:02:45.502403 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2026-03-07 00:02:45.504815 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2026-03-07 00:02:45.510999 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2026-03-07 00:02:45.512066 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2026-03-07 00:02:55.497817 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2026-03-07 00:02:55.501009 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [21s elapsed]
2026-03-07 00:02:55.503346 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2026-03-07 00:02:55.505539 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2026-03-07 00:02:55.511902 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2026-03-07 00:02:55.512959 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2026-03-07 00:03:05.506864 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed]
2026-03-07 00:03:05.506949 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed]
2026-03-07 00:03:05.506956 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [31s elapsed]
2026-03-07 00:03:05.506968 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [31s elapsed]
2026-03-07 00:03:05.512156 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed]
2026-03-07 00:03:05.513423 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed]
2026-03-07 00:03:06.000381 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creation complete after 30s [id=3dec96f1-0031-4b2e-b1f3-49930a4da608]
2026-03-07 00:03:06.159989 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creation complete after 31s [id=fccaeff9-3f07-4254-b23e-752203212188]
2026-03-07 00:03:06.224549 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creation complete after 30s [id=2ba5d253-a751-4660-9317-f36c36f0345b]
2026-03-07 00:03:06.246227 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creation complete after 31s [id=e6ac2b03-aee2-41f7-9fea-f2fb8be1b8a9]
2026-03-07 00:03:15.507621 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [40s elapsed]
2026-03-07 00:03:15.512655 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [40s elapsed]
2026-03-07 00:03:16.726871 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creation complete after 41s [id=a4b4b291-9b27-459d-b8ec-a469d8f62c8c]
2026-03-07 00:03:16.939266 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creation complete after 41s [id=07e462cb-5ff1-47da-9824-c7ac10ad8ca5]
2026-03-07 00:03:16.958293 | orchestrator | null_resource.node_semaphore: Creating...
2026-03-07 00:03:16.964385 | orchestrator | null_resource.node_semaphore: Creation complete after 0s [id=5011422083129755006]
2026-03-07 00:03:16.964974 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2026-03-07 00:03:16.965263 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2026-03-07 00:03:16.965435 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2026-03-07 00:03:16.980739 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
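Note the ordering in the log: `null_resource.node_semaphore` is created only after the last `node_server` instance completes, and only then do the `node_volume_attachment` resources start. That is a common Terraform gating pattern, sketched below under assumptions; the actual mapping of attachments to instances in the project is unknown, so `local.attachment_instance_ids` is a hypothetical placeholder.

```hcl
# Hypothetical sketch of the semaphore pattern visible in the log: a
# null_resource that depends on the whole instance set, so every volume
# attachment waits for all servers instead of just its own.
resource "null_resource" "node_semaphore" {
  depends_on = [openstack_compute_instance_v2.node_server]
}

resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
  count      = 9
  depends_on = [null_resource.node_semaphore]

  # local.attachment_instance_ids is an assumed lookup table; the log shows
  # three volumes attached to each of three of the node servers.
  instance_id = local.attachment_instance_ids[count.index]
  volume_id   = openstack_blockstorage_volume_v3.node_volume[count.index].id
}
```

Funneling the dependency through one `null_resource` keeps the graph simple and makes the "all servers are up" barrier explicit, at the cost of serializing attachments behind the slowest instance.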
2026-03-07 00:03:16.981534 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2026-03-07 00:03:16.984581 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2026-03-07 00:03:16.992830 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2026-03-07 00:03:16.995178 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2026-03-07 00:03:16.997190 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2026-03-07 00:03:16.997611 | orchestrator | openstack_compute_instance_v2.manager_server: Creating...
2026-03-07 00:03:20.378849 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 3s [id=e6ac2b03-aee2-41f7-9fea-f2fb8be1b8a9/56f8efd0-3f15-4df4-bf76-395b3326da9d]
2026-03-07 00:03:20.384129 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 3s [id=07e462cb-5ff1-47da-9824-c7ac10ad8ca5/cc667673-5185-49c1-bb99-04f4fd4068da]
2026-03-07 00:03:20.402756 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 3s [id=e6ac2b03-aee2-41f7-9fea-f2fb8be1b8a9/c20bba62-61d0-4a1a-9760-7959bbad95dc]
2026-03-07 00:03:20.424175 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 3s [id=fccaeff9-3f07-4254-b23e-752203212188/9c38bee3-edc8-40af-8be7-576eb57a340e]
2026-03-07 00:03:20.434245 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 3s [id=07e462cb-5ff1-47da-9824-c7ac10ad8ca5/72259f68-e866-4719-b0ea-eb473e4fd6bd]
2026-03-07 00:03:20.447404 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 3s [id=fccaeff9-3f07-4254-b23e-752203212188/aeae70bf-06ae-4bd4-b471-9be2a413b359]
2026-03-07 00:03:26.515381 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 10s [id=07e462cb-5ff1-47da-9824-c7ac10ad8ca5/6b3da8fe-8a9b-450a-9caf-2db14f74686e]
2026-03-07 00:03:26.531117 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 10s [id=e6ac2b03-aee2-41f7-9fea-f2fb8be1b8a9/34b2d3d1-49da-433c-9475-894febcc7103]
2026-03-07 00:03:26.551666 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 10s [id=fccaeff9-3f07-4254-b23e-752203212188/c95cdd10-84fe-4990-af41-f1a34ec8ee15]
2026-03-07 00:03:26.999364 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed]
2026-03-07 00:03:36.999894 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed]
2026-03-07 00:03:37.429395 | orchestrator | openstack_compute_instance_v2.manager_server: Creation complete after 20s [id=87635169-1d58-4336-b1a8-69911067c049]
2026-03-07 00:03:38.436760 | orchestrator |
2026-03-07 00:03:38.436873 | orchestrator | Apply complete! Resources: 64 added, 0 changed, 0 destroyed.
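The `Outputs:` section that follows prints `manager_address` and `private_key` without values because both were declared sensitive (the plan listed them as `(sensitive value)`). A minimal sketch of such output declarations; the value expressions here are assumed, not taken from the project:

```hcl
# Sensitive outputs are redacted in the human-readable apply summary and in
# plain `terraform output`; they can still be read with
# `terraform output -raw manager_address` or `terraform output -json`.
output "manager_address" {
  value     = openstack_networking_floatingip_v2.manager_floating_ip.address  # assumed source
  sensitive = true
}

output "private_key" {
  value     = local_sensitive_file.id_rsa.content  # assumed source
  sensitive = true
}
```

Marking the key sensitive keeps it out of CI consoles like this one, which is exactly why the subsequent "Get ssh keypair from terraform environment" task has to fetch it programmatically rather than scraping the log.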
2026-03-07 00:03:38.436891 | orchestrator | 2026-03-07 00:03:38.436907 | orchestrator | Outputs: 2026-03-07 00:03:38.436921 | orchestrator | 2026-03-07 00:03:38.436935 | orchestrator | manager_address = 2026-03-07 00:03:38.436971 | orchestrator | private_key = 2026-03-07 00:03:38.923665 | orchestrator | ok: Runtime: 0:01:22.903038 2026-03-07 00:03:38.960267 | 2026-03-07 00:03:38.960455 | TASK [Create infrastructure (stable)] 2026-03-07 00:03:39.498109 | orchestrator | skipping: Conditional result was False 2026-03-07 00:03:39.519434 | 2026-03-07 00:03:39.519599 | TASK [Fetch manager address] 2026-03-07 00:03:40.028445 | orchestrator | ok 2026-03-07 00:03:40.041309 | 2026-03-07 00:03:40.041485 | TASK [Set manager_host address] 2026-03-07 00:03:40.128923 | orchestrator | ok 2026-03-07 00:03:40.141861 | 2026-03-07 00:03:40.142029 | LOOP [Update ansible collections] 2026-03-07 00:03:41.614296 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-03-07 00:03:41.614633 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2026-03-07 00:03:41.614677 | orchestrator | Starting galaxy collection install process 2026-03-07 00:03:41.614705 | orchestrator | Process install dependency map 2026-03-07 00:03:41.614729 | orchestrator | Starting collection install process 2026-03-07 00:03:41.614751 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed01/.ansible/collections/ansible_collections/osism/commons' 2026-03-07 00:03:41.614779 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed01/.ansible/collections/ansible_collections/osism/commons 2026-03-07 00:03:41.614815 | orchestrator | osism.commons:999.0.0 was installed successfully 2026-03-07 00:03:41.614901 | orchestrator | ok: Item: commons Runtime: 0:00:01.095830 2026-03-07 00:03:42.675231 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-03-07 
00:03:42.675371 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2026-03-07 00:03:42.675492 | orchestrator | Starting galaxy collection install process 2026-03-07 00:03:42.675518 | orchestrator | Process install dependency map 2026-03-07 00:03:42.675541 | orchestrator | Starting collection install process 2026-03-07 00:03:42.675563 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed01/.ansible/collections/ansible_collections/osism/services' 2026-03-07 00:03:42.675583 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed01/.ansible/collections/ansible_collections/osism/services 2026-03-07 00:03:42.675603 | orchestrator | osism.services:999.0.0 was installed successfully 2026-03-07 00:03:42.675638 | orchestrator | ok: Item: services Runtime: 0:00:00.768757 2026-03-07 00:03:42.697125 | 2026-03-07 00:03:42.697415 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-03-07 00:03:53.313264 | orchestrator | ok 2026-03-07 00:03:53.333629 | 2026-03-07 00:03:53.333865 | TASK [Wait a little longer for the manager so that everything is ready] 2026-03-07 00:04:53.377449 | orchestrator | ok 2026-03-07 00:04:53.386795 | 2026-03-07 00:04:53.386935 | TASK [Fetch manager ssh hostkey] 2026-03-07 00:04:54.956091 | orchestrator | Output suppressed because no_log was given 2026-03-07 00:04:54.971570 | 2026-03-07 00:04:54.971746 | TASK [Get ssh keypair from terraform environment] 2026-03-07 00:04:55.508058 | orchestrator | ok: Runtime: 0:00:00.013370 2026-03-07 00:04:55.523644 | 2026-03-07 00:04:55.523795 | TASK [Point out that the following task takes some time and does not give any output] 2026-03-07 00:04:55.557704 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 
2026-03-07 00:04:55.567909 | 2026-03-07 00:04:55.568052 | TASK [Run manager part 0] 2026-03-07 00:04:56.577831 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-03-07 00:04:56.627274 | orchestrator | 2026-03-07 00:04:56.627325 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2026-03-07 00:04:56.627333 | orchestrator | 2026-03-07 00:04:56.627346 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2026-03-07 00:04:58.823726 | orchestrator | ok: [testbed-manager] 2026-03-07 00:04:58.823792 | orchestrator | 2026-03-07 00:04:58.823815 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-03-07 00:04:58.823824 | orchestrator | 2026-03-07 00:04:58.823833 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-07 00:05:00.938231 | orchestrator | ok: [testbed-manager] 2026-03-07 00:05:00.938356 | orchestrator | 2026-03-07 00:05:00.938366 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-03-07 00:05:01.675967 | orchestrator | ok: [testbed-manager] 2026-03-07 00:05:01.676083 | orchestrator | 2026-03-07 00:05:01.676096 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2026-03-07 00:05:01.726597 | orchestrator | skipping: [testbed-manager] 2026-03-07 00:05:01.726661 | orchestrator | 2026-03-07 00:05:01.726671 | orchestrator | TASK [Update package cache] **************************************************** 2026-03-07 00:05:01.762596 | orchestrator | skipping: [testbed-manager] 2026-03-07 00:05:01.762644 | orchestrator | 2026-03-07 00:05:01.762652 | orchestrator | TASK [Install required packages] *********************************************** 2026-03-07 00:05:01.795869 | orchestrator | skipping: [testbed-manager] 2026-03-07 00:05:01.795915 | 
orchestrator | 2026-03-07 00:05:01.795923 | orchestrator | TASK [Remove some python packages] ********************************************* 2026-03-07 00:05:01.829664 | orchestrator | skipping: [testbed-manager] 2026-03-07 00:05:01.829714 | orchestrator | 2026-03-07 00:05:01.829721 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-03-07 00:05:01.868609 | orchestrator | skipping: [testbed-manager] 2026-03-07 00:05:01.868645 | orchestrator | 2026-03-07 00:05:01.868652 | orchestrator | TASK [Fail if Ubuntu version is lower than 24.04] ****************************** 2026-03-07 00:05:01.906325 | orchestrator | skipping: [testbed-manager] 2026-03-07 00:05:01.906360 | orchestrator | 2026-03-07 00:05:01.906368 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2026-03-07 00:05:01.955929 | orchestrator | skipping: [testbed-manager] 2026-03-07 00:05:01.955999 | orchestrator | 2026-03-07 00:05:01.956016 | orchestrator | TASK [Set APT options on manager] ********************************************** 2026-03-07 00:05:03.875265 | orchestrator | changed: [testbed-manager] 2026-03-07 00:05:03.875335 | orchestrator | 2026-03-07 00:05:03.875350 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2026-03-07 00:08:27.731049 | orchestrator | changed: [testbed-manager] 2026-03-07 00:08:27.731205 | orchestrator | 2026-03-07 00:08:27.731224 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2026-03-07 00:09:56.697589 | orchestrator | changed: [testbed-manager] 2026-03-07 00:09:56.697639 | orchestrator | 2026-03-07 00:09:56.697649 | orchestrator | TASK [Install required packages] *********************************************** 2026-03-07 00:10:20.179182 | orchestrator | changed: [testbed-manager] 2026-03-07 00:10:20.179243 | orchestrator | 2026-03-07 00:10:20.179256 | orchestrator | TASK [Remove 
some python packages] ********************************************* 2026-03-07 00:10:29.887859 | orchestrator | changed: [testbed-manager] 2026-03-07 00:10:29.887921 | orchestrator | 2026-03-07 00:10:29.887933 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-03-07 00:10:29.941549 | orchestrator | ok: [testbed-manager] 2026-03-07 00:10:29.941650 | orchestrator | 2026-03-07 00:10:29.941669 | orchestrator | TASK [Get current user] ******************************************************** 2026-03-07 00:10:30.750895 | orchestrator | ok: [testbed-manager] 2026-03-07 00:10:30.751012 | orchestrator | 2026-03-07 00:10:30.751030 | orchestrator | TASK [Create venv directory] *************************************************** 2026-03-07 00:10:31.555474 | orchestrator | changed: [testbed-manager] 2026-03-07 00:10:31.555743 | orchestrator | 2026-03-07 00:10:31.555803 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2026-03-07 00:10:38.161797 | orchestrator | changed: [testbed-manager] 2026-03-07 00:10:38.161891 | orchestrator | 2026-03-07 00:10:38.161926 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2026-03-07 00:10:44.821473 | orchestrator | changed: [testbed-manager] 2026-03-07 00:10:44.821565 | orchestrator | 2026-03-07 00:10:44.821583 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2026-03-07 00:10:47.610355 | orchestrator | changed: [testbed-manager] 2026-03-07 00:10:47.610413 | orchestrator | 2026-03-07 00:10:47.610425 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2026-03-07 00:10:49.497460 | orchestrator | changed: [testbed-manager] 2026-03-07 00:10:49.497511 | orchestrator | 2026-03-07 00:10:49.497520 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2026-03-07 
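The venv bootstrap performed by the tasks above ("Create venv directory" through "Install docker >= 7.1.0") boils down to a few commands. A hedged sketch, wrapped in a function so the target directory can be swapped; the job itself uses /opt/venv, and the pins mirror the task names in the log:

```shell
#!/bin/sh
# Recreate the manager's Ansible venv: a virtualenv plus the pinned
# Python dependencies the playbooks need. Path and pins follow the
# task names above; adjust as needed.
bootstrap_venv() {
    venv_dir="${1:-/opt/venv}"
    python3 -m venv "$venv_dir"
    "$venv_dir/bin/pip" install netaddr ansible-core \
        'requests>=2.32.2' 'docker>=7.1.0'
}
```

Note the quoting around the version specifiers: unquoted, `>=` would be interpreted as a shell redirection.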
00:10:50.650462 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-03-07 00:10:50.650572 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-03-07 00:10:50.650587 | orchestrator | 2026-03-07 00:10:50.650600 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2026-03-07 00:10:50.693052 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-03-07 00:10:50.693144 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-03-07 00:10:50.693161 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-03-07 00:10:50.693174 | orchestrator | deprecation_warnings=False in ansible.cfg. 2026-03-07 00:10:54.058387 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-03-07 00:10:54.058438 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-03-07 00:10:54.058444 | orchestrator | 2026-03-07 00:10:54.058449 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2026-03-07 00:10:54.684270 | orchestrator | changed: [testbed-manager] 2026-03-07 00:10:54.684390 | orchestrator | 2026-03-07 00:10:54.684415 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2026-03-07 00:11:17.775181 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2026-03-07 00:11:17.775499 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2026-03-07 00:11:17.775537 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2026-03-07 00:11:17.775596 | orchestrator | 2026-03-07 00:11:17.775619 | orchestrator | TASK [Install local collections] *********************************************** 2026-03-07 00:11:20.224898 | orchestrator | changed: [testbed-manager] => 
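The two collection-install tasks above ("Install collections from Ansible galaxy" and "Install local collections") correspond roughly to `ansible-galaxy collection install` invocations; a sketch, assuming the /usr/share/ansible collections path created above and the /opt/src checkouts synced earlier:

```shell
#!/bin/sh
# Install the upstream collections into the shared collections path,
# then the two local checkouts synced into /opt/src earlier.
# ansible-galaxy accepts both Galaxy names and local source directories.
install_collections() {
    dest="${1:-/usr/share/ansible}"
    ansible-galaxy collection install -p "$dest" \
        ansible.netcommon ansible.posix 'community.docker>=3.10.2'
    for c in osism/ansible-collection-commons osism/ansible-collection-services; do
        ansible-galaxy collection install -p "$dest" "/opt/src/$c"
    done
}
```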
(item=ansible-collection-commons) 2026-03-07 00:11:20.224947 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2026-03-07 00:11:20.224952 | orchestrator | 2026-03-07 00:11:20.224958 | orchestrator | PLAY [Create operator user] **************************************************** 2026-03-07 00:11:20.224963 | orchestrator | 2026-03-07 00:11:20.224968 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-07 00:11:21.728072 | orchestrator | ok: [testbed-manager] 2026-03-07 00:11:21.728195 | orchestrator | 2026-03-07 00:11:21.728215 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-03-07 00:11:21.782714 | orchestrator | ok: [testbed-manager] 2026-03-07 00:11:21.782801 | orchestrator | 2026-03-07 00:11:21.782812 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-03-07 00:11:21.853873 | orchestrator | ok: [testbed-manager] 2026-03-07 00:11:21.853962 | orchestrator | 2026-03-07 00:11:21.853979 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-03-07 00:11:22.740819 | orchestrator | changed: [testbed-manager] 2026-03-07 00:11:22.740873 | orchestrator | 2026-03-07 00:11:22.740881 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-03-07 00:11:23.526834 | orchestrator | changed: [testbed-manager] 2026-03-07 00:11:23.527003 | orchestrator | 2026-03-07 00:11:23.527015 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-03-07 00:11:25.049632 | orchestrator | changed: [testbed-manager] => (item=adm) 2026-03-07 00:11:25.049679 | orchestrator | changed: [testbed-manager] => (item=sudo) 2026-03-07 00:11:25.049686 | orchestrator | 2026-03-07 00:11:25.049701 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] 
************************* 2026-03-07 00:11:26.482100 | orchestrator | changed: [testbed-manager] 2026-03-07 00:11:26.482228 | orchestrator | 2026-03-07 00:11:26.482246 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-03-07 00:11:28.387599 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2026-03-07 00:11:28.387648 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2026-03-07 00:11:28.387657 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2026-03-07 00:11:28.387664 | orchestrator | 2026-03-07 00:11:28.387673 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-03-07 00:11:28.451720 | orchestrator | skipping: [testbed-manager] 2026-03-07 00:11:28.451767 | orchestrator | 2026-03-07 00:11:28.451776 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-03-07 00:11:28.536859 | orchestrator | skipping: [testbed-manager] 2026-03-07 00:11:28.536903 | orchestrator | 2026-03-07 00:11:28.536913 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-03-07 00:11:29.123880 | orchestrator | changed: [testbed-manager] 2026-03-07 00:11:29.123980 | orchestrator | 2026-03-07 00:11:29.123997 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-03-07 00:11:29.201286 | orchestrator | skipping: [testbed-manager] 2026-03-07 00:11:29.201377 | orchestrator | 2026-03-07 00:11:29.201394 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-03-07 00:11:30.145980 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-07 00:11:30.146209 | orchestrator | changed: [testbed-manager] 2026-03-07 00:11:30.146237 | orchestrator | 2026-03-07 00:11:30.146254 | orchestrator | TASK 
[osism.commons.operator : Delete ssh authorized keys] ********************* 2026-03-07 00:11:30.189634 | orchestrator | skipping: [testbed-manager] 2026-03-07 00:11:30.189728 | orchestrator | 2026-03-07 00:11:30.189746 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-03-07 00:11:30.224744 | orchestrator | skipping: [testbed-manager] 2026-03-07 00:11:30.224831 | orchestrator | 2026-03-07 00:11:30.224846 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-03-07 00:11:30.264387 | orchestrator | skipping: [testbed-manager] 2026-03-07 00:11:30.264545 | orchestrator | 2026-03-07 00:11:30.264572 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-03-07 00:11:30.344904 | orchestrator | skipping: [testbed-manager] 2026-03-07 00:11:30.344999 | orchestrator | 2026-03-07 00:11:30.345014 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-03-07 00:11:31.097438 | orchestrator | ok: [testbed-manager] 2026-03-07 00:11:31.097549 | orchestrator | 2026-03-07 00:11:31.097565 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-03-07 00:11:31.097578 | orchestrator | 2026-03-07 00:11:31.097590 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-07 00:11:32.585982 | orchestrator | ok: [testbed-manager] 2026-03-07 00:11:32.586116 | orchestrator | 2026-03-07 00:11:32.586134 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2026-03-07 00:11:33.606943 | orchestrator | changed: [testbed-manager] 2026-03-07 00:11:33.607085 | orchestrator | 2026-03-07 00:11:33.607106 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-07 00:11:33.607121 | orchestrator | testbed-manager : ok=33 changed=23 
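The osism.commons.operator play that just finished creates the operator account in several small steps. Their approximate shell equivalent is sketched below; the user name and key path are placeholders, since the role's defaults are not visible in this log:

```shell
#!/bin/sh
# Approximate shell equivalent of the operator role's core tasks.
# OPERATOR and the key file are illustrative placeholders.
create_operator() {
    user="$1" pubkey="$2"
    sudo groupadd -f "$user"                               # "Create operator group"
    sudo useradd -m -g "$user" -s /bin/bash "$user"        # "Create user"
    sudo usermod -aG adm,sudo "$user"                      # "Add user to additional groups"
    echo "$user ALL=(ALL) NOPASSWD: ALL" \
        | sudo tee "/etc/sudoers.d/$user" >/dev/null       # "Copy user sudoers file"
    sudo install -d -m 0700 -o "$user" -g "$user" \
        "/home/$user/.ssh"                                 # "Create .ssh directory"
    sudo install -m 0600 -o "$user" -g "$user" "$pubkey" \
        "/home/$user/.ssh/authorized_keys"                 # "Set ssh authorized keys"
    sudo passwd -l "$user"                                 # "Unset & lock password"
}
```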
unreachable=0 failed=0 skipped=14 rescued=0 ignored=0 2026-03-07 00:11:33.607133 | orchestrator | 2026-03-07 00:11:33.859858 | orchestrator | ok: Runtime: 0:06:37.827212 2026-03-07 00:11:33.876217 | 2026-03-07 00:11:33.876376 | TASK [Point out that logging in on the manager is now possible] 2026-03-07 00:11:33.923749 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 2026-03-07 00:11:33.932728 | 2026-03-07 00:11:33.932850 | TASK [Point out that the following task takes some time and does not give any output] 2026-03-07 00:11:33.977036 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output shown here. It takes a few minutes for this task to complete. 2026-03-07 00:11:33.985702 | 2026-03-07 00:11:33.985821 | TASK [Run manager part 1 + 2] 2026-03-07 00:11:34.879511 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-03-07 00:11:34.938899 | orchestrator | 2026-03-07 00:11:34.939052 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2026-03-07 00:11:34.939086 | orchestrator | 2026-03-07 00:11:34.939132 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-07 00:11:37.545558 | orchestrator | ok: [testbed-manager] 2026-03-07 00:11:37.545659 | orchestrator | 2026-03-07 00:11:37.545707 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-03-07 00:11:37.587215 | orchestrator | skipping: [testbed-manager] 2026-03-07 00:11:37.587303 | orchestrator | 2026-03-07 00:11:37.587321 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-03-07 00:11:37.630283 | orchestrator | ok: [testbed-manager] 2026-03-07 00:11:37.630338 | orchestrator | 2026-03-07 00:11:37.630346 | orchestrator | TASK [osism.commons.repository : Gather variables for
each operating system] *** 2026-03-07 00:11:37.665546 | orchestrator | ok: [testbed-manager] 2026-03-07 00:11:37.665603 | orchestrator | 2026-03-07 00:11:37.665610 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-03-07 00:11:37.726681 | orchestrator | ok: [testbed-manager] 2026-03-07 00:11:37.726739 | orchestrator | 2026-03-07 00:11:37.726747 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-03-07 00:11:37.783225 | orchestrator | ok: [testbed-manager] 2026-03-07 00:11:37.783297 | orchestrator | 2026-03-07 00:11:37.783305 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-03-07 00:11:37.840877 | orchestrator | included: /home/zuul-testbed01/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2026-03-07 00:11:37.840979 | orchestrator | 2026-03-07 00:11:37.841006 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-03-07 00:11:38.620298 | orchestrator | ok: [testbed-manager] 2026-03-07 00:11:38.620889 | orchestrator | 2026-03-07 00:11:38.620916 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-03-07 00:11:38.681405 | orchestrator | skipping: [testbed-manager] 2026-03-07 00:11:38.681499 | orchestrator | 2026-03-07 00:11:38.681507 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-03-07 00:11:40.095752 | orchestrator | changed: [testbed-manager] 2026-03-07 00:11:40.095877 | orchestrator | 2026-03-07 00:11:40.095898 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-03-07 00:11:40.710482 | orchestrator | ok: [testbed-manager] 2026-03-07 00:11:40.710589 | orchestrator | 2026-03-07 00:11:40.710605 | orchestrator | TASK [osism.commons.repository : Copy 
ubuntu.sources file] ********************* 2026-03-07 00:11:41.934310 | orchestrator | changed: [testbed-manager] 2026-03-07 00:11:41.934418 | orchestrator | 2026-03-07 00:11:41.934455 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-03-07 00:11:57.811172 | orchestrator | changed: [testbed-manager] 2026-03-07 00:11:57.811623 | orchestrator | 2026-03-07 00:11:57.811653 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-03-07 00:11:58.532646 | orchestrator | ok: [testbed-manager] 2026-03-07 00:11:58.532736 | orchestrator | 2026-03-07 00:11:58.532755 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2026-03-07 00:11:58.586335 | orchestrator | skipping: [testbed-manager] 2026-03-07 00:11:58.586404 | orchestrator | 2026-03-07 00:11:58.586411 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2026-03-07 00:11:59.570441 | orchestrator | changed: [testbed-manager] 2026-03-07 00:11:59.570479 | orchestrator | 2026-03-07 00:11:59.570486 | orchestrator | TASK [Copy SSH private key] **************************************************** 2026-03-07 00:12:00.601894 | orchestrator | changed: [testbed-manager] 2026-03-07 00:12:00.601995 | orchestrator | 2026-03-07 00:12:00.602013 | orchestrator | TASK [Create configuration directory] ****************************************** 2026-03-07 00:12:01.211051 | orchestrator | changed: [testbed-manager] 2026-03-07 00:12:01.211142 | orchestrator | 2026-03-07 00:12:01.211159 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2026-03-07 00:12:01.251749 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-03-07 00:12:01.251856 | orchestrator | display.prompt_until(msg) instead. 
This feature will be removed in version 2026-03-07 00:12:01.251871 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-03-07 00:12:01.251884 | orchestrator | deprecation_warnings=False in ansible.cfg. 2026-03-07 00:12:03.646928 | orchestrator | changed: [testbed-manager] 2026-03-07 00:12:03.646986 | orchestrator | 2026-03-07 00:12:03.646993 | orchestrator | TASK [Install python requirements in venv] ************************************* 2026-03-07 00:12:12.918679 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2026-03-07 00:12:12.918752 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2026-03-07 00:12:12.918775 | orchestrator | ok: [testbed-manager] => (item=packaging) 2026-03-07 00:12:12.918791 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2026-03-07 00:12:12.918811 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2026-03-07 00:12:12.918821 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2026-03-07 00:12:12.918830 | orchestrator | 2026-03-07 00:12:12.918840 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2026-03-07 00:12:14.027122 | orchestrator | changed: [testbed-manager] 2026-03-07 00:12:14.027214 | orchestrator | 2026-03-07 00:12:14.027233 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2026-03-07 00:12:14.071985 | orchestrator | skipping: [testbed-manager] 2026-03-07 00:12:14.072080 | orchestrator | 2026-03-07 00:12:14.072099 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2026-03-07 00:12:17.409120 | orchestrator | changed: [testbed-manager] 2026-03-07 00:12:17.409200 | orchestrator | 2026-03-07 00:12:17.409212 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2026-03-07 00:12:17.455934 | orchestrator | skipping: [testbed-manager] 2026-03-07 00:12:17.456037 | 
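The two CA tasks above ("Copy testbed custom CA certificate on Debian/Ubuntu" and "Run update-ca-certificates on Debian/Ubuntu") use the standard Debian trust-store mechanism; a minimal sketch, with the destination file name chosen freely for illustration:

```shell
#!/bin/sh
# On Debian/Ubuntu a custom CA goes into
# /usr/local/share/ca-certificates (the file must end in .crt), then
# update-ca-certificates regenerates the bundle under /etc/ssl/certs.
install_testbed_ca() {
    src="$1"
    sudo install -m 0644 "$src" /usr/local/share/ca-certificates/testbed.crt
    sudo update-ca-certificates
}
```

The skipped CentOS/RedHat variants in the log would instead copy into /etc/pki/ca-trust/source/anchors and run update-ca-trust.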
orchestrator | 2026-03-07 00:12:17.456061 | orchestrator | TASK [Run manager part 2] ****************************************************** 2026-03-07 00:14:00.880160 | orchestrator | changed: [testbed-manager] 2026-03-07 00:14:00.880212 | orchestrator | 2026-03-07 00:14:00.880220 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-03-07 00:14:02.065237 | orchestrator | ok: [testbed-manager] 2026-03-07 00:14:02.066265 | orchestrator | 2026-03-07 00:14:02.066365 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-07 00:14:02.066383 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2026-03-07 00:14:02.066395 | orchestrator | 2026-03-07 00:14:02.620811 | orchestrator | ok: Runtime: 0:02:27.912305 2026-03-07 00:14:02.631399 | 2026-03-07 00:14:02.631525 | TASK [Reboot manager] 2026-03-07 00:14:04.177564 | orchestrator | ok: Runtime: 0:00:01.019314 2026-03-07 00:14:04.189269 | 2026-03-07 00:14:04.189421 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-03-07 00:14:20.616076 | orchestrator | ok 2026-03-07 00:14:20.626106 | 2026-03-07 00:14:20.626228 | TASK [Wait a little longer for the manager so that everything is ready] 2026-03-07 00:15:20.666768 | orchestrator | ok 2026-03-07 00:15:20.676389 | 2026-03-07 00:15:20.676537 | TASK [Deploy manager + bootstrap nodes] 2026-03-07 00:15:23.358609 | orchestrator | 2026-03-07 00:15:23.358850 | orchestrator | # DEPLOY MANAGER 2026-03-07 00:15:23.358884 | orchestrator | 2026-03-07 00:15:23.358901 | orchestrator | + set -e 2026-03-07 00:15:23.358920 | orchestrator | + echo 2026-03-07 00:15:23.358939 | orchestrator | + echo '# DEPLOY MANAGER' 2026-03-07 00:15:23.358962 | orchestrator | + echo 2026-03-07 00:15:23.359015 | orchestrator | + cat /opt/manager-vars.sh 2026-03-07 00:15:23.362584 | orchestrator | export NUMBER_OF_NODES=6 2026-03-07 
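The task 'Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"' is Ansible's wait_for module with a search_regex; done by hand it amounts to polling the SSH banner. A sketch, assuming nc is available:

```shell
#!/bin/sh
# Poll a host's SSH port until the banner contains "OpenSSH",
# mirroring wait_for: port=22 search_regex=OpenSSH timeout=300.
wait_for_ssh() {
    host="$1" port="${2:-22}" tries="${3:-60}"
    i=0
    while [ "$i" -lt "$tries" ]; do
        # SSH servers send their banner first, so reading one line
        # from the socket is enough.
        banner="$(printf '' | timeout 5 nc "$host" "$port" 2>/dev/null | head -n 1 || true)"
        case "$banner" in *OpenSSH*) return 0 ;; esac
        i=$((i + 1))
        sleep 5
    done
    return 1
}
```

Matching on the banner rather than just a successful TCP connect matters after a reboot: the port can accept connections before sshd is actually the listener.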
00:15:23.362675 | orchestrator | 2026-03-07 00:15:23.362690 | orchestrator | export CEPH_VERSION=reef 2026-03-07 00:15:23.362701 | orchestrator | export CONFIGURATION_VERSION=main 2026-03-07 00:15:23.362712 | orchestrator | export MANAGER_VERSION=latest 2026-03-07 00:15:23.362732 | orchestrator | export OPENSTACK_VERSION=2024.2 2026-03-07 00:15:23.362741 | orchestrator | 2026-03-07 00:15:23.362757 | orchestrator | export ARA=false 2026-03-07 00:15:23.362767 | orchestrator | export DEPLOY_MODE=manager 2026-03-07 00:15:23.362781 | orchestrator | export TEMPEST=true 2026-03-07 00:15:23.362790 | orchestrator | export IS_ZUUL=true 2026-03-07 00:15:23.362799 | orchestrator | 2026-03-07 00:15:23.362813 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.154 2026-03-07 00:15:23.362823 | orchestrator | export EXTERNAL_API=false 2026-03-07 00:15:23.362832 | orchestrator | 2026-03-07 00:15:23.362841 | orchestrator | export IMAGE_USER=ubuntu 2026-03-07 00:15:23.362853 | orchestrator | export IMAGE_NODE_USER=ubuntu 2026-03-07 00:15:23.362861 | orchestrator | 2026-03-07 00:15:23.362870 | orchestrator | export CEPH_STACK=ceph-ansible 2026-03-07 00:15:23.362887 | orchestrator | 2026-03-07 00:15:23.362896 | orchestrator | + echo 2026-03-07 00:15:23.362906 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-07 00:15:23.363918 | orchestrator | ++ export INTERACTIVE=false 2026-03-07 00:15:23.363935 | orchestrator | ++ INTERACTIVE=false 2026-03-07 00:15:23.363946 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-07 00:15:23.363956 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-07 00:15:23.363968 | orchestrator | + source /opt/manager-vars.sh 2026-03-07 00:15:23.363994 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-07 00:15:23.364005 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-07 00:15:23.364014 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-07 00:15:23.364023 | orchestrator | ++ CEPH_VERSION=reef 2026-03-07 00:15:23.364048 | orchestrator 
| ++ export CONFIGURATION_VERSION=main 2026-03-07 00:15:23.364059 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-07 00:15:23.364088 | orchestrator | ++ export MANAGER_VERSION=latest 2026-03-07 00:15:23.364098 | orchestrator | ++ MANAGER_VERSION=latest 2026-03-07 00:15:23.364107 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-07 00:15:23.364125 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-07 00:15:23.364134 | orchestrator | ++ export ARA=false 2026-03-07 00:15:23.364143 | orchestrator | ++ ARA=false 2026-03-07 00:15:23.364151 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-07 00:15:23.364160 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-07 00:15:23.364168 | orchestrator | ++ export TEMPEST=true 2026-03-07 00:15:23.364180 | orchestrator | ++ TEMPEST=true 2026-03-07 00:15:23.364189 | orchestrator | ++ export IS_ZUUL=true 2026-03-07 00:15:23.364216 | orchestrator | ++ IS_ZUUL=true 2026-03-07 00:15:23.364342 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.154 2026-03-07 00:15:23.364356 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.154 2026-03-07 00:15:23.364365 | orchestrator | ++ export EXTERNAL_API=false 2026-03-07 00:15:23.364374 | orchestrator | ++ EXTERNAL_API=false 2026-03-07 00:15:23.364383 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-07 00:15:23.364391 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-07 00:15:23.364400 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-07 00:15:23.364409 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-07 00:15:23.364417 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-07 00:15:23.364426 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-07 00:15:23.364439 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2026-03-07 00:15:23.423968 | orchestrator | + docker version 2026-03-07 00:15:23.564108 | orchestrator | Client: Docker Engine - Community 2026-03-07 00:15:23.564217 | orchestrator | Version: 27.5.1 
2026-03-07 00:15:23.564233 | orchestrator | API version: 1.47 2026-03-07 00:15:23.564247 | orchestrator | Go version: go1.22.11 2026-03-07 00:15:23.564258 | orchestrator | Git commit: 9f9e405 2026-03-07 00:15:23.564269 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-03-07 00:15:23.564282 | orchestrator | OS/Arch: linux/amd64 2026-03-07 00:15:23.564293 | orchestrator | Context: default 2026-03-07 00:15:23.564304 | orchestrator | 2026-03-07 00:15:23.564315 | orchestrator | Server: Docker Engine - Community 2026-03-07 00:15:23.564327 | orchestrator | Engine: 2026-03-07 00:15:23.564338 | orchestrator | Version: 27.5.1 2026-03-07 00:15:23.564365 | orchestrator | API version: 1.47 (minimum version 1.24) 2026-03-07 00:15:23.564408 | orchestrator | Go version: go1.22.11 2026-03-07 00:15:23.564421 | orchestrator | Git commit: 4c9b3b0 2026-03-07 00:15:23.564432 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-03-07 00:15:23.564443 | orchestrator | OS/Arch: linux/amd64 2026-03-07 00:15:23.564453 | orchestrator | Experimental: false 2026-03-07 00:15:23.564464 | orchestrator | containerd: 2026-03-07 00:15:23.564475 | orchestrator | Version: v2.2.1 2026-03-07 00:15:23.564487 | orchestrator | GitCommit: dea7da592f5d1d2b7755e3a161be07f43fad8f75 2026-03-07 00:15:23.564498 | orchestrator | runc: 2026-03-07 00:15:23.564509 | orchestrator | Version: 1.3.4 2026-03-07 00:15:23.564520 | orchestrator | GitCommit: v1.3.4-0-gd6d73eb8 2026-03-07 00:15:23.564531 | orchestrator | docker-init: 2026-03-07 00:15:23.564542 | orchestrator | Version: 0.19.0 2026-03-07 00:15:23.564554 | orchestrator | GitCommit: de40ad0 2026-03-07 00:15:23.567544 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2026-03-07 00:15:23.577763 | orchestrator | + set -e 2026-03-07 00:15:23.577842 | orchestrator | + source /opt/manager-vars.sh 2026-03-07 00:15:23.577854 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-07 00:15:23.577867 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-07 
00:15:23.577877 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-07 00:15:23.577887 | orchestrator | ++ CEPH_VERSION=reef 2026-03-07 00:15:23.577897 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-07 00:15:23.577908 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-07 00:15:23.577917 | orchestrator | ++ export MANAGER_VERSION=latest 2026-03-07 00:15:23.577927 | orchestrator | ++ MANAGER_VERSION=latest 2026-03-07 00:15:23.577937 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-07 00:15:23.577947 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-07 00:15:23.577957 | orchestrator | ++ export ARA=false 2026-03-07 00:15:23.577967 | orchestrator | ++ ARA=false 2026-03-07 00:15:23.577977 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-07 00:15:23.577987 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-07 00:15:23.577997 | orchestrator | ++ export TEMPEST=true 2026-03-07 00:15:23.578006 | orchestrator | ++ TEMPEST=true 2026-03-07 00:15:23.578087 | orchestrator | ++ export IS_ZUUL=true 2026-03-07 00:15:23.578101 | orchestrator | ++ IS_ZUUL=true 2026-03-07 00:15:23.578111 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.154 2026-03-07 00:15:23.578121 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.154 2026-03-07 00:15:23.578131 | orchestrator | ++ export EXTERNAL_API=false 2026-03-07 00:15:23.578140 | orchestrator | ++ EXTERNAL_API=false 2026-03-07 00:15:23.578150 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-07 00:15:23.578161 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-07 00:15:23.578178 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-07 00:15:23.578194 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-07 00:15:23.578211 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-07 00:15:23.578226 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-07 00:15:23.578242 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-07 00:15:23.578258 | orchestrator | ++ export 
INTERACTIVE=false 2026-03-07 00:15:23.578274 | orchestrator | ++ INTERACTIVE=false 2026-03-07 00:15:23.578305 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-07 00:15:23.578328 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-07 00:15:23.578345 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-03-07 00:15:23.578362 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-03-07 00:15:23.578379 | orchestrator | + /opt/configuration/scripts/set-ceph-version.sh reef 2026-03-07 00:15:23.583409 | orchestrator | + set -e 2026-03-07 00:15:23.583480 | orchestrator | + VERSION=reef 2026-03-07 00:15:23.584007 | orchestrator | ++ grep '^ceph_version:' /opt/configuration/environments/manager/configuration.yml 2026-03-07 00:15:23.590560 | orchestrator | + [[ -n ceph_version: reef ]] 2026-03-07 00:15:23.590689 | orchestrator | + sed -i 's/ceph_version: .*/ceph_version: reef/g' /opt/configuration/environments/manager/configuration.yml 2026-03-07 00:15:23.594778 | orchestrator | + /opt/configuration/scripts/set-openstack-version.sh 2024.2 2026-03-07 00:15:23.600699 | orchestrator | + set -e 2026-03-07 00:15:23.600794 | orchestrator | + VERSION=2024.2 2026-03-07 00:15:23.600873 | orchestrator | ++ grep '^openstack_version:' /opt/configuration/environments/manager/configuration.yml 2026-03-07 00:15:23.603014 | orchestrator | + [[ -n openstack_version: 2024.2 ]] 2026-03-07 00:15:23.603067 | orchestrator | + sed -i 's/openstack_version: .*/openstack_version: 2024.2/g' /opt/configuration/environments/manager/configuration.yml 2026-03-07 00:15:23.608300 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2026-03-07 00:15:23.608924 | orchestrator | ++ semver latest 7.0.0 2026-03-07 00:15:23.668450 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-07 00:15:23.668667 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-03-07 00:15:23.668718 | orchestrator | + echo 'enable_osism_kubernetes: true' 2026-03-07 00:15:23.668756 | orchestrator | ++ semver latest 10.0.0-0 2026-03-07 00:15:23.727561 | 
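set-ceph-version.sh and set-openstack-version.sh, whose traces appear above, share one pattern: verify the key exists in configuration.yml, then sed the value in place. A condensed sketch (`set_version` is an illustrative name; the job edits /opt/configuration/environments/manager/configuration.yml):

```shell
#!/bin/sh
# Pin "<key>: <value>" in a YAML file in place, as the two
# set-*-version.sh scripts do for ceph_version and openstack_version.
# Like the originals (which run under set -e), this fails if the key
# is absent rather than silently appending it.
set_version() {
    key="$1" version="$2" config="$3"
    if grep -q "^${key}:" "$config"; then
        sed -i "s/^${key}: .*/${key}: ${version}/" "$config"
    else
        echo "missing ${key} in ${config}" >&2
        return 1
    fi
}
```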
orchestrator | + [[ -1 -ge 0 ]] 2026-03-07 00:15:23.727877 | orchestrator | ++ semver 2024.2 2025.1 2026-03-07 00:15:23.785737 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-07 00:15:23.785852 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2026-03-07 00:15:23.880989 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-03-07 00:15:23.882632 | orchestrator | + source /opt/venv/bin/activate 2026-03-07 00:15:23.884286 | orchestrator | ++ deactivate nondestructive 2026-03-07 00:15:23.884339 | orchestrator | ++ '[' -n '' ']' 2026-03-07 00:15:23.884362 | orchestrator | ++ '[' -n '' ']' 2026-03-07 00:15:23.884435 | orchestrator | ++ hash -r 2026-03-07 00:15:23.884550 | orchestrator | ++ '[' -n '' ']' 2026-03-07 00:15:23.884572 | orchestrator | ++ unset VIRTUAL_ENV 2026-03-07 00:15:23.884584 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-03-07 00:15:23.884599 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']' 2026-03-07 00:15:23.884649 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-03-07 00:15:23.884670 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-03-07 00:15:23.884689 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-03-07 00:15:23.884709 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-03-07 00:15:23.884773 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-07 00:15:23.884828 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-07 00:15:23.884848 | orchestrator | ++ export PATH 2026-03-07 00:15:23.884860 | orchestrator | ++ '[' -n '' ']' 2026-03-07 00:15:23.884871 | orchestrator | ++ '[' -z '' ']' 2026-03-07 00:15:23.884882 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-03-07 00:15:23.884893 | orchestrator | ++ PS1='(venv) ' 2026-03-07 00:15:23.884904 | orchestrator | ++ export PS1 2026-03-07 00:15:23.884915 | orchestrator | ++ 
VIRTUAL_ENV_PROMPT='(venv) ' 2026-03-07 00:15:23.884927 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-03-07 00:15:23.884939 | orchestrator | ++ hash -r 2026-03-07 00:15:23.885058 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2026-03-07 00:15:25.306167 | orchestrator | 2026-03-07 00:15:25.306295 | orchestrator | PLAY [Copy custom facts] ******************************************************* 2026-03-07 00:15:25.306328 | orchestrator | 2026-03-07 00:15:25.306346 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-03-07 00:15:25.902648 | orchestrator | ok: [testbed-manager] 2026-03-07 00:15:25.902745 | orchestrator | 2026-03-07 00:15:25.902759 | orchestrator | TASK [Copy fact files] ********************************************************* 2026-03-07 00:15:26.968401 | orchestrator | changed: [testbed-manager] 2026-03-07 00:15:26.968521 | orchestrator | 2026-03-07 00:15:26.968544 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2026-03-07 00:15:26.968564 | orchestrator | 2026-03-07 00:15:26.968582 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-07 00:15:29.460681 | orchestrator | ok: [testbed-manager] 2026-03-07 00:15:29.460832 | orchestrator | 2026-03-07 00:15:29.460849 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2026-03-07 00:15:29.529146 | orchestrator | ok: [testbed-manager] 2026-03-07 00:15:29.529238 | orchestrator | 2026-03-07 00:15:29.529257 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2026-03-07 00:15:30.034887 | orchestrator | changed: [testbed-manager] 2026-03-07 00:15:30.035014 | orchestrator | 2026-03-07 00:15:30.035031 | orchestrator | TASK [Add netbox_enable parameter] 
********************************************* 2026-03-07 00:15:30.088310 | orchestrator | skipping: [testbed-manager] 2026-03-07 00:15:30.088422 | orchestrator | 2026-03-07 00:15:30.088440 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2026-03-07 00:15:30.450193 | orchestrator | changed: [testbed-manager] 2026-03-07 00:15:30.450284 | orchestrator | 2026-03-07 00:15:30.450296 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2026-03-07 00:15:30.823948 | orchestrator | ok: [testbed-manager] 2026-03-07 00:15:30.824040 | orchestrator | 2026-03-07 00:15:30.824053 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2026-03-07 00:15:30.948133 | orchestrator | skipping: [testbed-manager] 2026-03-07 00:15:30.948235 | orchestrator | 2026-03-07 00:15:30.948251 | orchestrator | PLAY [Apply role traefik] ****************************************************** 2026-03-07 00:15:30.948265 | orchestrator | 2026-03-07 00:15:30.948276 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-07 00:15:32.806890 | orchestrator | ok: [testbed-manager] 2026-03-07 00:15:32.806997 | orchestrator | 2026-03-07 00:15:32.807014 | orchestrator | TASK [Apply traefik role] ****************************************************** 2026-03-07 00:15:32.920308 | orchestrator | included: osism.services.traefik for testbed-manager 2026-03-07 00:15:32.920406 | orchestrator | 2026-03-07 00:15:32.920420 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2026-03-07 00:15:32.983964 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2026-03-07 00:15:32.984062 | orchestrator | 2026-03-07 00:15:32.984078 | orchestrator | TASK [osism.services.traefik : Create required directories] 
******************** 2026-03-07 00:15:34.141424 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2026-03-07 00:15:34.141523 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 2026-03-07 00:15:34.141539 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2026-03-07 00:15:34.141553 | orchestrator | 2026-03-07 00:15:34.141565 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2026-03-07 00:15:36.050999 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2026-03-07 00:15:36.051141 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2026-03-07 00:15:36.051156 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2026-03-07 00:15:36.051169 | orchestrator | 2026-03-07 00:15:36.051181 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ******************** 2026-03-07 00:15:36.735037 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-07 00:15:36.735142 | orchestrator | changed: [testbed-manager] 2026-03-07 00:15:36.735158 | orchestrator | 2026-03-07 00:15:36.735171 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2026-03-07 00:15:37.419234 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-07 00:15:37.419355 | orchestrator | changed: [testbed-manager] 2026-03-07 00:15:37.419371 | orchestrator | 2026-03-07 00:15:37.419384 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2026-03-07 00:15:37.482094 | orchestrator | skipping: [testbed-manager] 2026-03-07 00:15:37.482216 | orchestrator | 2026-03-07 00:15:37.482243 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2026-03-07 00:15:37.876245 | orchestrator | ok: [testbed-manager] 2026-03-07 00:15:37.876348 | orchestrator | 2026-03-07 00:15:37.876364 | orchestrator | 
TASK [osism.services.traefik : Include service tasks] ************************** 2026-03-07 00:15:37.957169 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2026-03-07 00:15:37.957269 | orchestrator | 2026-03-07 00:15:37.957286 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2026-03-07 00:15:39.152900 | orchestrator | changed: [testbed-manager] 2026-03-07 00:15:39.153005 | orchestrator | 2026-03-07 00:15:39.153032 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2026-03-07 00:15:40.080741 | orchestrator | changed: [testbed-manager] 2026-03-07 00:15:40.080850 | orchestrator | 2026-03-07 00:15:40.080872 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2026-03-07 00:15:51.404978 | orchestrator | changed: [testbed-manager] 2026-03-07 00:15:51.405046 | orchestrator | 2026-03-07 00:15:51.405067 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2026-03-07 00:15:51.473102 | orchestrator | skipping: [testbed-manager] 2026-03-07 00:15:51.473218 | orchestrator | 2026-03-07 00:15:51.473235 | orchestrator | PLAY [Deploy manager service] ************************************************** 2026-03-07 00:15:51.473248 | orchestrator | 2026-03-07 00:15:51.473260 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-07 00:15:53.327873 | orchestrator | ok: [testbed-manager] 2026-03-07 00:15:53.327981 | orchestrator | 2026-03-07 00:15:53.328015 | orchestrator | TASK [Apply manager role] ****************************************************** 2026-03-07 00:15:53.446340 | orchestrator | included: osism.services.manager for testbed-manager 2026-03-07 00:15:53.446435 | orchestrator | 2026-03-07 00:15:53.446447 | orchestrator | TASK 
[osism.services.manager : Include install tasks] ************************** 2026-03-07 00:15:53.517006 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2026-03-07 00:15:53.517098 | orchestrator | 2026-03-07 00:15:53.517112 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2026-03-07 00:15:56.069405 | orchestrator | ok: [testbed-manager] 2026-03-07 00:15:56.069571 | orchestrator | 2026-03-07 00:15:56.069591 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2026-03-07 00:15:56.129184 | orchestrator | ok: [testbed-manager] 2026-03-07 00:15:56.129281 | orchestrator | 2026-03-07 00:15:56.129296 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2026-03-07 00:15:56.253728 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2026-03-07 00:15:56.253798 | orchestrator | 2026-03-07 00:15:56.253807 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2026-03-07 00:15:59.261114 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2026-03-07 00:15:59.261216 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2026-03-07 00:15:59.261227 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2026-03-07 00:15:59.261236 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2026-03-07 00:15:59.261244 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2026-03-07 00:15:59.261252 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2026-03-07 00:15:59.261260 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2026-03-07 00:15:59.261268 | orchestrator | changed: [testbed-manager] 
=> (item=/opt/state) 2026-03-07 00:15:59.261276 | orchestrator | 2026-03-07 00:15:59.261285 | orchestrator | TASK [osism.services.manager : Copy all environment file] ********************** 2026-03-07 00:15:59.920620 | orchestrator | changed: [testbed-manager] 2026-03-07 00:15:59.920724 | orchestrator | 2026-03-07 00:15:59.920740 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2026-03-07 00:16:00.600865 | orchestrator | changed: [testbed-manager] 2026-03-07 00:16:00.600956 | orchestrator | 2026-03-07 00:16:00.600968 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2026-03-07 00:16:00.681390 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2026-03-07 00:16:00.681562 | orchestrator | 2026-03-07 00:16:00.681595 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] ********************* 2026-03-07 00:16:01.969659 | orchestrator | changed: [testbed-manager] => (item=ara) 2026-03-07 00:16:01.969805 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2026-03-07 00:16:01.969823 | orchestrator | 2026-03-07 00:16:01.969837 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2026-03-07 00:16:02.646600 | orchestrator | changed: [testbed-manager] 2026-03-07 00:16:02.646706 | orchestrator | 2026-03-07 00:16:02.646724 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2026-03-07 00:16:02.695327 | orchestrator | skipping: [testbed-manager] 2026-03-07 00:16:02.695425 | orchestrator | 2026-03-07 00:16:02.695440 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ****************** 2026-03-07 00:16:02.775965 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager 2026-03-07 00:16:02.776064 | orchestrator | 2026-03-07 00:16:02.776080 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] ***************** 2026-03-07 00:16:03.431303 | orchestrator | changed: [testbed-manager] 2026-03-07 00:16:03.431375 | orchestrator | 2026-03-07 00:16:03.431382 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2026-03-07 00:16:03.491461 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2026-03-07 00:16:03.491621 | orchestrator | 2026-03-07 00:16:03.491637 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2026-03-07 00:16:04.897632 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-07 00:16:04.897721 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-07 00:16:04.897734 | orchestrator | changed: [testbed-manager] 2026-03-07 00:16:04.897744 | orchestrator | 2026-03-07 00:16:04.897754 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2026-03-07 00:16:05.565431 | orchestrator | changed: [testbed-manager] 2026-03-07 00:16:05.565564 | orchestrator | 2026-03-07 00:16:05.565578 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2026-03-07 00:16:05.627412 | orchestrator | skipping: [testbed-manager] 2026-03-07 00:16:05.627604 | orchestrator | 2026-03-07 00:16:05.627634 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2026-03-07 00:16:05.733069 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2026-03-07 00:16:05.733171 | orchestrator | 
2026-03-07 00:16:05.733186 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2026-03-07 00:16:06.284311 | orchestrator | changed: [testbed-manager] 2026-03-07 00:16:06.284383 | orchestrator | 2026-03-07 00:16:06.284406 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2026-03-07 00:16:06.702523 | orchestrator | changed: [testbed-manager] 2026-03-07 00:16:06.702628 | orchestrator | 2026-03-07 00:16:06.702645 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2026-03-07 00:16:07.995405 | orchestrator | changed: [testbed-manager] => (item=conductor) 2026-03-07 00:16:07.995548 | orchestrator | changed: [testbed-manager] => (item=openstack) 2026-03-07 00:16:07.995575 | orchestrator | 2026-03-07 00:16:07.995591 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2026-03-07 00:16:08.666901 | orchestrator | changed: [testbed-manager] 2026-03-07 00:16:08.666999 | orchestrator | 2026-03-07 00:16:08.667016 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2026-03-07 00:16:09.073717 | orchestrator | ok: [testbed-manager] 2026-03-07 00:16:09.073898 | orchestrator | 2026-03-07 00:16:09.073928 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2026-03-07 00:16:09.478510 | orchestrator | changed: [testbed-manager] 2026-03-07 00:16:09.479612 | orchestrator | 2026-03-07 00:16:09.479663 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2026-03-07 00:16:09.532453 | orchestrator | skipping: [testbed-manager] 2026-03-07 00:16:09.532575 | orchestrator | 2026-03-07 00:16:09.532592 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2026-03-07 00:16:09.614353 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2026-03-07 00:16:09.614429 | orchestrator | 2026-03-07 00:16:09.614437 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2026-03-07 00:16:09.669617 | orchestrator | ok: [testbed-manager] 2026-03-07 00:16:09.669684 | orchestrator | 2026-03-07 00:16:09.669691 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2026-03-07 00:16:11.853966 | orchestrator | changed: [testbed-manager] => (item=osism) 2026-03-07 00:16:11.854124 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2026-03-07 00:16:11.854166 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2026-03-07 00:16:11.854178 | orchestrator | 2026-03-07 00:16:11.854190 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2026-03-07 00:16:12.604554 | orchestrator | changed: [testbed-manager] 2026-03-07 00:16:12.604653 | orchestrator | 2026-03-07 00:16:12.604667 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2026-03-07 00:16:13.356091 | orchestrator | changed: [testbed-manager] 2026-03-07 00:16:13.356201 | orchestrator | 2026-03-07 00:16:13.356217 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2026-03-07 00:16:14.102901 | orchestrator | changed: [testbed-manager] 2026-03-07 00:16:14.102998 | orchestrator | 2026-03-07 00:16:14.103015 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2026-03-07 00:16:14.188835 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2026-03-07 00:16:14.188916 | orchestrator | 2026-03-07 00:16:14.188928 | orchestrator | TASK 
[osism.services.manager : Include scripts vars file] ********************** 2026-03-07 00:16:14.238006 | orchestrator | ok: [testbed-manager] 2026-03-07 00:16:14.238085 | orchestrator | 2026-03-07 00:16:14.238091 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2026-03-07 00:16:14.981074 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2026-03-07 00:16:14.981137 | orchestrator | 2026-03-07 00:16:14.981146 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2026-03-07 00:16:15.071650 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2026-03-07 00:16:15.071730 | orchestrator | 2026-03-07 00:16:15.071739 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2026-03-07 00:16:15.830426 | orchestrator | changed: [testbed-manager] 2026-03-07 00:16:15.830515 | orchestrator | 2026-03-07 00:16:15.830527 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2026-03-07 00:16:16.454983 | orchestrator | ok: [testbed-manager] 2026-03-07 00:16:16.455075 | orchestrator | 2026-03-07 00:16:16.455082 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2026-03-07 00:16:16.516017 | orchestrator | skipping: [testbed-manager] 2026-03-07 00:16:16.516114 | orchestrator | 2026-03-07 00:16:16.516129 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2026-03-07 00:16:16.577832 | orchestrator | ok: [testbed-manager] 2026-03-07 00:16:16.577923 | orchestrator | 2026-03-07 00:16:16.577937 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2026-03-07 00:16:17.435265 | orchestrator | changed: [testbed-manager] 2026-03-07 00:16:17.435361 | orchestrator | 2026-03-07 
00:16:17.435377 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2026-03-07 00:17:32.046209 | orchestrator | changed: [testbed-manager] 2026-03-07 00:17:32.046372 | orchestrator | 2026-03-07 00:17:32.046388 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2026-03-07 00:17:33.083449 | orchestrator | ok: [testbed-manager] 2026-03-07 00:17:33.083561 | orchestrator | 2026-03-07 00:17:33.083578 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] ******* 2026-03-07 00:17:33.145579 | orchestrator | skipping: [testbed-manager] 2026-03-07 00:17:33.145672 | orchestrator | 2026-03-07 00:17:33.145687 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2026-03-07 00:17:38.041701 | orchestrator | changed: [testbed-manager] 2026-03-07 00:17:38.041906 | orchestrator | 2026-03-07 00:17:38.041928 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2026-03-07 00:17:38.149432 | orchestrator | ok: [testbed-manager] 2026-03-07 00:17:38.149547 | orchestrator | 2026-03-07 00:17:38.149587 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-03-07 00:17:38.149600 | orchestrator | 2026-03-07 00:17:38.149612 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2026-03-07 00:17:38.204514 | orchestrator | skipping: [testbed-manager] 2026-03-07 00:17:38.204586 | orchestrator | 2026-03-07 00:17:38.204601 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2026-03-07 00:18:38.270592 | orchestrator | Pausing for 60 seconds 2026-03-07 00:18:38.270707 | orchestrator | changed: [testbed-manager] 2026-03-07 00:18:38.270723 | orchestrator | 2026-03-07 00:18:38.270736 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure 
that all containers are up] *** 2026-03-07 00:18:41.431599 | orchestrator | changed: [testbed-manager] 2026-03-07 00:18:41.431734 | orchestrator | 2026-03-07 00:18:41.431752 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2026-03-07 00:19:43.714228 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2026-03-07 00:19:43.714366 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 2026-03-07 00:19:43.714391 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (48 retries left). 2026-03-07 00:19:43.714452 | orchestrator | changed: [testbed-manager] 2026-03-07 00:19:43.714468 | orchestrator | 2026-03-07 00:19:43.714480 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2026-03-07 00:19:55.185511 | orchestrator | changed: [testbed-manager] 2026-03-07 00:19:55.185650 | orchestrator | 2026-03-07 00:19:55.185677 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2026-03-07 00:19:55.268780 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2026-03-07 00:19:55.268901 | orchestrator | 2026-03-07 00:19:55.268917 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-03-07 00:19:55.268930 | orchestrator | 2026-03-07 00:19:55.268941 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2026-03-07 00:19:55.334515 | orchestrator | skipping: [testbed-manager] 2026-03-07 00:19:55.334619 | orchestrator | 2026-03-07 00:19:55.334635 | orchestrator | TASK [osism.services.manager : Include version verification tasks] ************* 2026-03-07 00:19:55.410220 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager 2026-03-07 00:19:55.410345 | orchestrator | 2026-03-07 00:19:55.410359 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] **** 2026-03-07 00:19:56.229078 | orchestrator | changed: [testbed-manager] 2026-03-07 00:19:56.229210 | orchestrator | 2026-03-07 00:19:56.229229 | orchestrator | TASK [osism.services.manager : Execute service manager version check] ********** 2026-03-07 00:19:59.765862 | orchestrator | ok: [testbed-manager] 2026-03-07 00:19:59.766004 | orchestrator | 2026-03-07 00:19:59.766071 | orchestrator | TASK [osism.services.manager : Display version check results] ****************** 2026-03-07 00:19:59.846062 | orchestrator | ok: [testbed-manager] => { 2026-03-07 00:19:59.846160 | orchestrator | "version_check_result.stdout_lines": [ 2026-03-07 00:19:59.846173 | orchestrator | "=== OSISM Container Version Check ===", 2026-03-07 00:19:59.846184 | orchestrator | "Checking running containers against expected versions...", 2026-03-07 00:19:59.846194 | orchestrator | "", 2026-03-07 00:19:59.846203 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)", 2026-03-07 00:19:59.846212 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:latest", 2026-03-07 00:19:59.846220 | orchestrator | " Enabled: true", 2026-03-07 00:19:59.846228 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:latest", 2026-03-07 00:19:59.846237 | orchestrator | " Status: ✅ MATCH", 2026-03-07 00:19:59.846245 | orchestrator | "", 2026-03-07 00:19:59.846254 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)", 2026-03-07 00:19:59.846263 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:latest", 2026-03-07 00:19:59.846271 | orchestrator | " Enabled: true", 2026-03-07 00:19:59.846280 | orchestrator | " Running: 
registry.osism.tech/osism/osism-ansible:latest", 2026-03-07 00:19:59.846288 | orchestrator | " Status: ✅ MATCH", 2026-03-07 00:19:59.846296 | orchestrator | "", 2026-03-07 00:19:59.846304 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)", 2026-03-07 00:19:59.846312 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:latest", 2026-03-07 00:19:59.846321 | orchestrator | " Enabled: true", 2026-03-07 00:19:59.846329 | orchestrator | " Running: registry.osism.tech/osism/osism-kubernetes:latest", 2026-03-07 00:19:59.846337 | orchestrator | " Status: ✅ MATCH", 2026-03-07 00:19:59.846345 | orchestrator | "", 2026-03-07 00:19:59.846353 | orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)", 2026-03-07 00:19:59.846362 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:reef", 2026-03-07 00:19:59.846371 | orchestrator | " Enabled: true", 2026-03-07 00:19:59.846380 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:reef", 2026-03-07 00:19:59.846389 | orchestrator | " Status: ✅ MATCH", 2026-03-07 00:19:59.846397 | orchestrator | "", 2026-03-07 00:19:59.846406 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)", 2026-03-07 00:19:59.846436 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:2024.2", 2026-03-07 00:19:59.846444 | orchestrator | " Enabled: true", 2026-03-07 00:19:59.846453 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:2024.2", 2026-03-07 00:19:59.846461 | orchestrator | " Status: ✅ MATCH", 2026-03-07 00:19:59.846469 | orchestrator | "", 2026-03-07 00:19:59.846478 | orchestrator | "Checking service: osismclient (OSISM Client)", 2026-03-07 00:19:59.846487 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-03-07 00:19:59.846496 | orchestrator | " Enabled: true", 2026-03-07 00:19:59.846505 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-03-07 00:19:59.846514 | 
orchestrator | " Status: ✅ MATCH", 2026-03-07 00:19:59.846522 | orchestrator | "", 2026-03-07 00:19:59.846531 | orchestrator | "Checking service: ara-server (ARA Server)", 2026-03-07 00:19:59.846539 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3", 2026-03-07 00:19:59.846548 | orchestrator | " Enabled: true", 2026-03-07 00:19:59.846557 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3", 2026-03-07 00:19:59.846565 | orchestrator | " Status: ✅ MATCH", 2026-03-07 00:19:59.846573 | orchestrator | "", 2026-03-07 00:19:59.846581 | orchestrator | "Checking service: mariadb (MariaDB for ARA)", 2026-03-07 00:19:59.846590 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-03-07 00:19:59.846598 | orchestrator | " Enabled: true", 2026-03-07 00:19:59.846607 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-03-07 00:19:59.846621 | orchestrator | " Status: ✅ MATCH", 2026-03-07 00:19:59.846629 | orchestrator | "", 2026-03-07 00:19:59.846637 | orchestrator | "Checking service: frontend (OSISM Frontend)", 2026-03-07 00:19:59.846649 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:latest", 2026-03-07 00:19:59.846658 | orchestrator | " Enabled: true", 2026-03-07 00:19:59.846666 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:latest", 2026-03-07 00:19:59.846674 | orchestrator | " Status: ✅ MATCH", 2026-03-07 00:19:59.846683 | orchestrator | "", 2026-03-07 00:19:59.846691 | orchestrator | "Checking service: redis (Redis Cache)", 2026-03-07 00:19:59.846701 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-03-07 00:19:59.846710 | orchestrator | " Enabled: true", 2026-03-07 00:19:59.846719 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-03-07 00:19:59.846727 | orchestrator | " Status: ✅ MATCH", 2026-03-07 00:19:59.846735 | orchestrator | "", 
2026-03-07 00:19:59.846743 | orchestrator | "Checking service: api (OSISM API Service)", 2026-03-07 00:19:59.846752 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-03-07 00:19:59.846761 | orchestrator | " Enabled: true", 2026-03-07 00:19:59.846770 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-03-07 00:19:59.846779 | orchestrator | " Status: ✅ MATCH", 2026-03-07 00:19:59.846788 | orchestrator | "", 2026-03-07 00:19:59.846797 | orchestrator | "Checking service: listener (OpenStack Event Listener)", 2026-03-07 00:19:59.846807 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-03-07 00:19:59.846817 | orchestrator | " Enabled: true", 2026-03-07 00:19:59.846826 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-03-07 00:19:59.846835 | orchestrator | " Status: ✅ MATCH", 2026-03-07 00:19:59.846845 | orchestrator | "", 2026-03-07 00:19:59.846853 | orchestrator | "Checking service: openstack (OpenStack Integration)", 2026-03-07 00:19:59.846861 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-03-07 00:19:59.846870 | orchestrator | " Enabled: true", 2026-03-07 00:19:59.846879 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-03-07 00:19:59.846888 | orchestrator | " Status: ✅ MATCH", 2026-03-07 00:19:59.846897 | orchestrator | "", 2026-03-07 00:19:59.846907 | orchestrator | "Checking service: beat (Celery Beat Scheduler)", 2026-03-07 00:19:59.846915 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-03-07 00:19:59.846924 | orchestrator | " Enabled: true", 2026-03-07 00:19:59.846940 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-03-07 00:19:59.846948 | orchestrator | " Status: ✅ MATCH", 2026-03-07 00:19:59.846956 | orchestrator | "", 2026-03-07 00:19:59.846982 | orchestrator | "Checking service: flower (Celery Flower Monitor)", 2026-03-07 00:19:59.847009 | orchestrator | " 
Expected: registry.osism.tech/osism/osism:latest", 2026-03-07 00:19:59.847017 | orchestrator | " Enabled: true", 2026-03-07 00:19:59.847025 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-03-07 00:19:59.847032 | orchestrator | " Status: ✅ MATCH", 2026-03-07 00:19:59.847039 | orchestrator | "", 2026-03-07 00:19:59.847046 | orchestrator | "=== Summary ===", 2026-03-07 00:19:59.847054 | orchestrator | "Errors (version mismatches): 0", 2026-03-07 00:19:59.847061 | orchestrator | "Warnings (expected containers not running): 0", 2026-03-07 00:19:59.847069 | orchestrator | "", 2026-03-07 00:19:59.847078 | orchestrator | "✅ All running containers match expected versions!" 2026-03-07 00:19:59.847086 | orchestrator | ] 2026-03-07 00:19:59.847094 | orchestrator | } 2026-03-07 00:19:59.847102 | orchestrator | 2026-03-07 00:19:59.847109 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] *** 2026-03-07 00:19:59.907250 | orchestrator | skipping: [testbed-manager] 2026-03-07 00:19:59.907357 | orchestrator | 2026-03-07 00:19:59.907373 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-07 00:19:59.907387 | orchestrator | testbed-manager : ok=70 changed=37 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2026-03-07 00:19:59.907400 | orchestrator | 2026-03-07 00:20:00.006855 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-03-07 00:20:00.007043 | orchestrator | + deactivate 2026-03-07 00:20:00.007077 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-03-07 00:20:00.007100 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-07 00:20:00.007120 | orchestrator | + export PATH 2026-03-07 00:20:00.007140 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-03-07 00:20:00.007161 | orchestrator | + '[' 
-n '' ']' 2026-03-07 00:20:00.007182 | orchestrator | + hash -r 2026-03-07 00:20:00.007201 | orchestrator | + '[' -n '' ']' 2026-03-07 00:20:00.007222 | orchestrator | + unset VIRTUAL_ENV 2026-03-07 00:20:00.007242 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-03-07 00:20:00.007262 | orchestrator | + '[' '!' '' = nondestructive ']' 2026-03-07 00:20:00.007280 | orchestrator | + unset -f deactivate 2026-03-07 00:20:00.007300 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2026-03-07 00:20:00.016701 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-03-07 00:20:00.016819 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-03-07 00:20:00.016834 | orchestrator | + local max_attempts=60 2026-03-07 00:20:00.016846 | orchestrator | + local name=ceph-ansible 2026-03-07 00:20:00.016858 | orchestrator | + local attempt_num=1 2026-03-07 00:20:00.017805 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-07 00:20:00.060188 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-07 00:20:00.060294 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-03-07 00:20:00.060311 | orchestrator | + local max_attempts=60 2026-03-07 00:20:00.060324 | orchestrator | + local name=kolla-ansible 2026-03-07 00:20:00.060335 | orchestrator | + local attempt_num=1 2026-03-07 00:20:00.061037 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-03-07 00:20:00.103903 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-07 00:20:00.104024 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-03-07 00:20:00.104038 | orchestrator | + local max_attempts=60 2026-03-07 00:20:00.104048 | orchestrator | + local name=osism-ansible 2026-03-07 00:20:00.104058 | orchestrator | + local attempt_num=1 2026-03-07 00:20:00.105250 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 
2026-03-07 00:20:00.143692 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-07 00:20:00.143792 | orchestrator | + [[ true == \t\r\u\e ]] 2026-03-07 00:20:00.143811 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-03-07 00:20:00.847369 | orchestrator | + docker compose --project-directory /opt/manager ps 2026-03-07 00:20:01.065553 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2026-03-07 00:20:01.065695 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" ceph-ansible 2 minutes ago Up About a minute (healthy) 2026-03-07 00:20:01.065712 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" kolla-ansible 2 minutes ago Up About a minute (healthy) 2026-03-07 00:20:01.065725 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" api 2 minutes ago Up 2 minutes (healthy) 192.168.16.5:8000->8000/tcp 2026-03-07 00:20:01.065739 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server 2 minutes ago Up 2 minutes (healthy) 8000/tcp 2026-03-07 00:20:01.065751 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" beat 2 minutes ago Up 2 minutes (healthy) 2026-03-07 00:20:01.065763 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" flower 2 minutes ago Up 2 minutes (healthy) 2026-03-07 00:20:01.065774 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" inventory_reconciler 2 minutes ago Up About a minute (healthy) 2026-03-07 00:20:01.065803 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" listener 2 minutes ago Up 2 minutes (healthy) 2026-03-07 00:20:01.065815 | orchestrator | manager-mariadb-1 
registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb 2 minutes ago Up 2 minutes (healthy) 3306/tcp 2026-03-07 00:20:01.065826 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" openstack 2 minutes ago Up 2 minutes (healthy) 2026-03-07 00:20:01.065837 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis 2 minutes ago Up 2 minutes (healthy) 6379/tcp 2026-03-07 00:20:01.065848 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" osism-ansible 2 minutes ago Up About a minute (healthy) 2026-03-07 00:20:01.065860 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" frontend 2 minutes ago Up 2 minutes 192.168.16.5:3000->3000/tcp 2026-03-07 00:20:01.065871 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" osism-kubernetes 2 minutes ago Up About a minute (healthy) 2026-03-07 00:20:01.065882 | orchestrator | osismclient registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" osismclient 2 minutes ago Up 2 minutes (healthy) 2026-03-07 00:20:01.072459 | orchestrator | ++ semver latest 7.0.0 2026-03-07 00:20:01.136648 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-07 00:20:01.136772 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-03-07 00:20:01.136797 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2026-03-07 00:20:01.143722 | orchestrator | + osism apply resolvconf -l testbed-manager 2026-03-07 00:20:13.385381 | orchestrator | 2026-03-07 00:20:13 | INFO  | Prepare task for execution of resolvconf. 2026-03-07 00:20:13.626511 | orchestrator | 2026-03-07 00:20:13 | INFO  | Task e28df0f3-b296-4a18-a071-a7f936a8dcec (resolvconf) was prepared for execution. 
2026-03-07 00:20:13.626643 | orchestrator | 2026-03-07 00:20:13 | INFO  | It takes a moment until task e28df0f3-b296-4a18-a071-a7f936a8dcec (resolvconf) has been started and output is visible here. 2026-03-07 00:20:29.298375 | orchestrator | 2026-03-07 00:20:29.298476 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2026-03-07 00:20:29.298489 | orchestrator | 2026-03-07 00:20:29.298497 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-07 00:20:29.298503 | orchestrator | Saturday 07 March 2026 00:20:18 +0000 (0:00:00.149) 0:00:00.149 ******** 2026-03-07 00:20:29.298509 | orchestrator | ok: [testbed-manager] 2026-03-07 00:20:29.298517 | orchestrator | 2026-03-07 00:20:29.298524 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2026-03-07 00:20:29.298532 | orchestrator | Saturday 07 March 2026 00:20:22 +0000 (0:00:04.987) 0:00:05.137 ******** 2026-03-07 00:20:29.298538 | orchestrator | skipping: [testbed-manager] 2026-03-07 00:20:29.298546 | orchestrator | 2026-03-07 00:20:29.298553 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-03-07 00:20:29.298560 | orchestrator | Saturday 07 March 2026 00:20:23 +0000 (0:00:00.068) 0:00:05.205 ******** 2026-03-07 00:20:29.298567 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2026-03-07 00:20:29.298576 | orchestrator | 2026-03-07 00:20:29.298582 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-03-07 00:20:29.298589 | orchestrator | Saturday 07 March 2026 00:20:23 +0000 (0:00:00.087) 0:00:05.293 ******** 2026-03-07 00:20:29.298596 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2026-03-07 00:20:29.298603 | orchestrator | 2026-03-07 00:20:29.298633 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2026-03-07 00:20:29.298645 | orchestrator | Saturday 07 March 2026 00:20:23 +0000 (0:00:00.082) 0:00:05.376 ******** 2026-03-07 00:20:29.298649 | orchestrator | ok: [testbed-manager] 2026-03-07 00:20:29.298653 | orchestrator | 2026-03-07 00:20:29.298657 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-03-07 00:20:29.298661 | orchestrator | Saturday 07 March 2026 00:20:24 +0000 (0:00:01.189) 0:00:06.565 ******** 2026-03-07 00:20:29.298666 | orchestrator | skipping: [testbed-manager] 2026-03-07 00:20:29.298669 | orchestrator | 2026-03-07 00:20:29.298674 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-03-07 00:20:29.298678 | orchestrator | Saturday 07 March 2026 00:20:24 +0000 (0:00:00.061) 0:00:06.627 ******** 2026-03-07 00:20:29.298681 | orchestrator | ok: [testbed-manager] 2026-03-07 00:20:29.298685 | orchestrator | 2026-03-07 00:20:29.298689 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-03-07 00:20:29.298693 | orchestrator | Saturday 07 March 2026 00:20:25 +0000 (0:00:00.528) 0:00:07.155 ******** 2026-03-07 00:20:29.298697 | orchestrator | skipping: [testbed-manager] 2026-03-07 00:20:29.298700 | orchestrator | 2026-03-07 00:20:29.298704 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-03-07 00:20:29.298710 | orchestrator | Saturday 07 March 2026 00:20:25 +0000 (0:00:00.086) 0:00:07.242 ******** 2026-03-07 00:20:29.298714 | orchestrator | changed: [testbed-manager] 2026-03-07 00:20:29.298717 | orchestrator | 2026-03-07 
00:20:29.298721 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-03-07 00:20:29.298725 | orchestrator | Saturday 07 March 2026 00:20:25 +0000 (0:00:00.560) 0:00:07.802 ******** 2026-03-07 00:20:29.298729 | orchestrator | changed: [testbed-manager] 2026-03-07 00:20:29.298732 | orchestrator | 2026-03-07 00:20:29.298736 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-03-07 00:20:29.298740 | orchestrator | Saturday 07 March 2026 00:20:26 +0000 (0:00:01.125) 0:00:08.928 ******** 2026-03-07 00:20:29.298744 | orchestrator | ok: [testbed-manager] 2026-03-07 00:20:29.298762 | orchestrator | 2026-03-07 00:20:29.298766 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-03-07 00:20:29.298770 | orchestrator | Saturday 07 March 2026 00:20:27 +0000 (0:00:01.004) 0:00:09.933 ******** 2026-03-07 00:20:29.298774 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2026-03-07 00:20:29.298778 | orchestrator | 2026-03-07 00:20:29.298781 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-03-07 00:20:29.298785 | orchestrator | Saturday 07 March 2026 00:20:27 +0000 (0:00:00.088) 0:00:10.021 ******** 2026-03-07 00:20:29.298789 | orchestrator | changed: [testbed-manager] 2026-03-07 00:20:29.298793 | orchestrator | 2026-03-07 00:20:29.298797 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-07 00:20:29.298802 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-07 00:20:29.298806 | orchestrator | 2026-03-07 00:20:29.298809 | orchestrator | 2026-03-07 00:20:29.298813 | orchestrator | TASKS RECAP 
******************************************************************** 2026-03-07 00:20:29.298817 | orchestrator | Saturday 07 March 2026 00:20:29 +0000 (0:00:01.193) 0:00:11.215 ******** 2026-03-07 00:20:29.298820 | orchestrator | =============================================================================== 2026-03-07 00:20:29.298824 | orchestrator | Gathering Facts --------------------------------------------------------- 4.99s 2026-03-07 00:20:29.298828 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.19s 2026-03-07 00:20:29.298832 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.19s 2026-03-07 00:20:29.298835 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.13s 2026-03-07 00:20:29.298839 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 1.00s 2026-03-07 00:20:29.298843 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.56s 2026-03-07 00:20:29.298860 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.53s 2026-03-07 00:20:29.298864 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.09s 2026-03-07 00:20:29.298868 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.09s 2026-03-07 00:20:29.298871 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.09s 2026-03-07 00:20:29.298875 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.08s 2026-03-07 00:20:29.298879 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.07s 2026-03-07 00:20:29.298883 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.06s 2026-03-07 00:20:29.636524 | 
orchestrator | + osism apply sshconfig 2026-03-07 00:20:41.819639 | orchestrator | 2026-03-07 00:20:41 | INFO  | Prepare task for execution of sshconfig. 2026-03-07 00:20:41.895392 | orchestrator | 2026-03-07 00:20:41 | INFO  | Task 179e30a8-5e5a-4cb4-9c0d-d93bfeb2304d (sshconfig) was prepared for execution. 2026-03-07 00:20:41.895501 | orchestrator | 2026-03-07 00:20:41 | INFO  | It takes a moment until task 179e30a8-5e5a-4cb4-9c0d-d93bfeb2304d (sshconfig) has been started and output is visible here. 2026-03-07 00:20:53.963630 | orchestrator | 2026-03-07 00:20:53.963773 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2026-03-07 00:20:53.963794 | orchestrator | 2026-03-07 00:20:53.963806 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2026-03-07 00:20:53.963817 | orchestrator | Saturday 07 March 2026 00:20:46 +0000 (0:00:00.165) 0:00:00.165 ******** 2026-03-07 00:20:53.963829 | orchestrator | ok: [testbed-manager] 2026-03-07 00:20:53.963840 | orchestrator | 2026-03-07 00:20:53.963851 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2026-03-07 00:20:53.963917 | orchestrator | Saturday 07 March 2026 00:20:46 +0000 (0:00:00.630) 0:00:00.796 ******** 2026-03-07 00:20:53.963929 | orchestrator | changed: [testbed-manager] 2026-03-07 00:20:53.963941 | orchestrator | 2026-03-07 00:20:53.963952 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2026-03-07 00:20:53.963963 | orchestrator | Saturday 07 March 2026 00:20:47 +0000 (0:00:00.598) 0:00:01.394 ******** 2026-03-07 00:20:53.963974 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2026-03-07 00:20:53.963985 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2026-03-07 00:20:53.963996 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2026-03-07 00:20:53.964007 | 
orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2026-03-07 00:20:53.964018 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2026-03-07 00:20:53.964031 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5) 2026-03-07 00:20:53.964049 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2026-03-07 00:20:53.964081 | orchestrator | 2026-03-07 00:20:53.964100 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2026-03-07 00:20:53.964118 | orchestrator | Saturday 07 March 2026 00:20:53 +0000 (0:00:05.647) 0:00:07.041 ******** 2026-03-07 00:20:53.964135 | orchestrator | skipping: [testbed-manager] 2026-03-07 00:20:53.964153 | orchestrator | 2026-03-07 00:20:53.964172 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2026-03-07 00:20:53.964191 | orchestrator | Saturday 07 March 2026 00:20:53 +0000 (0:00:00.071) 0:00:07.114 ******** 2026-03-07 00:20:53.964212 | orchestrator | changed: [testbed-manager] 2026-03-07 00:20:53.964231 | orchestrator | 2026-03-07 00:20:53.964249 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-07 00:20:53.964263 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-07 00:20:53.964277 | orchestrator | 2026-03-07 00:20:53.964290 | orchestrator | 2026-03-07 00:20:53.964300 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-07 00:20:53.964311 | orchestrator | Saturday 07 March 2026 00:20:53 +0000 (0:00:00.575) 0:00:07.689 ******** 2026-03-07 00:20:53.964322 | orchestrator | =============================================================================== 2026-03-07 00:20:53.964332 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.65s 2026-03-07 00:20:53.964343 | orchestrator | 
osism.commons.sshconfig : Get home directory of operator user ----------- 0.63s 2026-03-07 00:20:53.964353 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.60s 2026-03-07 00:20:53.964364 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.58s 2026-03-07 00:20:53.964375 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.07s 2026-03-07 00:20:54.226662 | orchestrator | + osism apply known-hosts 2026-03-07 00:21:06.312657 | orchestrator | 2026-03-07 00:21:06 | INFO  | Prepare task for execution of known-hosts. 2026-03-07 00:21:06.378125 | orchestrator | 2026-03-07 00:21:06 | INFO  | Task 9127088a-73ca-4798-9b19-6356fe9391ca (known-hosts) was prepared for execution. 2026-03-07 00:21:06.378209 | orchestrator | 2026-03-07 00:21:06 | INFO  | It takes a moment until task 9127088a-73ca-4798-9b19-6356fe9391ca (known-hosts) has been started and output is visible here. 2026-03-07 00:21:22.686643 | orchestrator | 2026-03-07 00:21:22.686785 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2026-03-07 00:21:22.686871 | orchestrator | 2026-03-07 00:21:22.686886 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2026-03-07 00:21:22.686899 | orchestrator | Saturday 07 March 2026 00:21:10 +0000 (0:00:00.176) 0:00:00.176 ******** 2026-03-07 00:21:22.686911 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-03-07 00:21:22.686923 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-03-07 00:21:22.686956 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-03-07 00:21:22.686968 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-03-07 00:21:22.686979 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-03-07 00:21:22.686989 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 
2026-03-07 00:21:22.687000 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-03-07 00:21:22.687011 | orchestrator | 2026-03-07 00:21:22.687023 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2026-03-07 00:21:22.687035 | orchestrator | Saturday 07 March 2026 00:21:16 +0000 (0:00:06.077) 0:00:06.254 ******** 2026-03-07 00:21:22.687060 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-03-07 00:21:22.687074 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-03-07 00:21:22.687086 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-03-07 00:21:22.687097 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-03-07 00:21:22.687108 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-03-07 00:21:22.687119 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-03-07 00:21:22.687132 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-03-07 00:21:22.687145 
| orchestrator | 2026-03-07 00:21:22.687159 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-07 00:21:22.687172 | orchestrator | Saturday 07 March 2026 00:21:16 +0000 (0:00:00.175) 0:00:06.430 ******** 2026-03-07 00:21:22.687188 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC9KPh2Ey38R98xzKdprcXuuaRTH17SjcJexyBE79lqa/3GtfVvPPQN0OqxqVVZBTU5036fDa+53sgLV9LWK/Z6rVErU5TCbGNhJaJ167Nj0lsc5X1sSmm+Oc7oqDysa4jf1VhzD7DrqPaSUYI3pWs5k4YyHS25oj+8sZrdwVOmnVIRmUVu6dx35hmvztnG7TheSIqEKm/Act2naWZr3q93aDBXmEH+Xk/BhTQbtreZE5+PBheFB+CbstKTmjkRsj2EB2YGa4+3ANSuDOQmbICJ2my2O+kwtFdLLYNZLwefx0d7z1prsoRtht6LjbDQ1DNi9IDFisi/kHnw9XbNtc5qhfIo3s2okEraAETzDllzQpDW5hRWl4/rpNzxJi1FjrVSvxa1pNcQGLYwG4RkJbcg1/I4nJp5nM8OVtI2avXISxD+s8o5hAcckKkzJcmlvfUWr8SjQdDkkshrqol6/USnEdYVZe3CtPImZjDl8TAGf5HCXXn+1k52N82jxCvbHCc=) 2026-03-07 00:21:22.687206 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKUvLi1fjxFnGLLKhfSpYa+8QqJajNBRlsQ/CK5z2so7Dp1woFxhkIRmPqWBpT4kPVyIYgOr0B1EskX7ooFAh2o=) 2026-03-07 00:21:22.687222 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFL5AqqG3XUSjde77rcJ/fc9A3qrr4u+HfcKqee5wHRZ) 2026-03-07 00:21:22.687236 | orchestrator | 2026-03-07 00:21:22.687249 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-07 00:21:22.687261 | orchestrator | Saturday 07 March 2026 00:21:17 +0000 (0:00:01.224) 0:00:07.655 ******** 2026-03-07 00:21:22.687274 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAII6Z7IEdjFMNzF08yb0kqNUkUsmBeFBy/8J1Dqemknp9) 2026-03-07 00:21:22.687320 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC3WzCUpOTYro+3Ny1bOs1dt1VZ6fEKwl09fZEy/LASODR+VX0KdK0Wt1QJAJLniJwuZLljSkApjB/FqhQdxCYYthR5vGVJEf2RPbFYCdG8hcL++nhGNf7/nwXBvNQTBfJhXX4FpWZIbhpMdOFPlmzW1i/vFF9JQVpdLZxeA5TvbkS4mBrAKlBaGiG8r9mcimm7eiCxMhxRol0+uNN+vQEysqjQ8w7oOu3Y+l6IzRAMM9VhWuul8YHj6S2kBuTAxYvYqPrK/0loJkzWT6GAY/8dSQ8FE/Hk3b7GJBFcBtFuUrTostN7hXG8lIt+r6h6nFYVRki6jNuC710vDIssVtOH2aR4+ir1VkHl0tEOCj7P4qbHHsEAAiSFSCyLnRVdA6l9glxlaHTqHp5TlN4D8xQ5XRkmG8YXRupAodYfrSqSQTL3CAZymufuccs42IexcvzUvh+hWJLBd52Lhuz/fRPH84UcDABJ5a1OllRwBIzTl6tQ+vh3p5Xfj2p9NW2NkCE=) 2026-03-07 00:21:22.687342 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCKfBNzmixol8E9/bnZblDzdB+1NzcFkHosEJMJp3Lke/c48DYMG25CLT84GJpP/8Hav5ZYKhZjhR82aWkEfVxI=) 2026-03-07 00:21:22.687354 | orchestrator | 2026-03-07 00:21:22.687365 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-07 00:21:22.687376 | orchestrator | Saturday 07 March 2026 00:21:18 +0000 (0:00:01.138) 0:00:08.793 ******** 2026-03-07 00:21:22.687388 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDaGQaA5Za3lGIEbkAVh6UnwKfcD7yKDnOCLS2v3qti7DmweYTxdZrkOghmELWClqzvsU03CmVoYY71oDy9CUnRxMSrin+FzWefErTwkZvwklWdhXdkoExcjMvorAiu2sct3ui2sRpWmOUwIct54mkRjXxeDPPb3k2D3D4GsGfW/jzaoE+16R3vF2QDt9kl3OMOVe4KgE6aBCTJ5qs/bbdHOPBHBsnfgjh5OP+YqOMst+AZgWdibVkTWndjiDpx5PP3mZzRSOW0ghusbbUSp6lgqq/FafKPgw8UbfGb7l1InRyFeg2k27Qg5V1YWWmD2IwEXUreTS304XJvIEbOc1ZWz1oXNP3tpAfJzL16Lh+NWVQA5ccAnb/VJwKKMenjyrBG8I7nViq28mYyVJA0BVvTPBZ9ujB1Gl26M8PzMLoJ4HFuCOWwsBHEsGZqn2lAd9IA9HnZ2U7i24lTlVqoNGl79dPJtRfITobi2C6FILOqnSVlmrQmg/qI8zp/wwzvIw8=) 2026-03-07 00:21:22.687403 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKNSZbl8p1MFVfUGEaqfAfTMalg7b+kTQLBiDNK0jms9) 2026-03-07 00:21:22.687511 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHcckFWkRO3wDmYXoRr5jazV6CcRrivpjenzfJhjyeFYjbjDY2evQGVdh4aCgNIL5q4ZvJF3dX6qk4N1KXOCw8Q=) 2026-03-07 00:21:22.687537 | orchestrator | 2026-03-07 00:21:22.687557 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-07 00:21:22.687576 | orchestrator | Saturday 07 March 2026 00:21:20 +0000 (0:00:01.140) 0:00:09.934 ******** 2026-03-07 00:21:22.687594 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFkcD6wNHoIm+t+Ji1ivn1k7x/nhcPWp/RM0bxU8qtRpyDvaYEtGhVU59lOH4Hvlv+fgGumqP05OU2WH9UfaWjg=) 2026-03-07 00:21:22.687607 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCd19rgot6XAbO3lZkQWcsFvQVO4KVCqXUMz2PkHph6ko7VRCjHGmH1/lmiI9jLoFL0RWy+dEvroEt8RLVphTd4ZZ1mokys+pzWU8CVGVjBxKJNLiWgXojdT2S/E3Du0TrzELfp7x4bezBpTZdCKPUDKFL2NWsI9RRVNpUXN3LQgBgvS0kYUNBqUBcWncUp+Ct9fo/OPzdMb0kw5p3z5XClVxDiOxe56uW3T5FFFIiJQvS0OnJYZPmb7YoVuoYiGJhE27RGtXpOQ80TRZX9rkJswV/yIJng9A/MWBTeIqAA/4dCJ5T7cs60cANlqKymv0xCmLVDQHxmqdgr2HzQr43tIoLEt+1iuudcBQiSxR+uFC1RcZcx2Vb+Wa2gsGQ7nZJHDvL8M9xoVhAlP8OvnQ7L5EK2rf+EAu1atO9K4KBx+HCjr2mZFGSENpUaeayJBiDB7xa1CiXdrh6x3ieTUNN47ZRTGLZ9AJ/R9nk5RccJ10WlU977iBPWNpPh3t00y5M=) 2026-03-07 00:21:22.687618 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIM5PTgtBfy8dgykERKaLQ/LOYsakL/6fX7ev+83CxJ8S) 2026-03-07 00:21:22.687629 | orchestrator | 2026-03-07 00:21:22.687640 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-07 00:21:22.687651 | orchestrator | Saturday 07 March 2026 00:21:21 +0000 (0:00:01.121) 0:00:11.056 ******** 2026-03-07 00:21:22.687662 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBB535FA0o42Br0XHr5mCjpK9r3IXotcsz9k8M2dO8QmU8CI9BsQHTu6zjJFsUmFU7SPWJesoKVdI1BXO9CS76G0=) 2026-03-07 00:21:22.687673 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILBGY2GpfHyIWZ3zPCrYl6feQIldOazzoa8Ris0Wk8sF) 2026-03-07 00:21:22.687693 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCgrI4EVWFO5NqTUiDhOEBGqjwJwUe41iFa8dsaF9Ts0sYQKSzV9EVkE5cvAtiQ7HpcP3kN65BM9JjsW4h05Bq7M+Q7Vhlmp5MlILXpawT+h3rDoGkiYfBe88d8sNZ2WSKEPvu3MvMABwdIpFj+eSLacuvdXHMuqFVTj8xDIf12Gp86nXRR7voTLckhgxeJKbQ5FKUeaVEy60rQZ2Qg6ymtA+4o7ZHGlJxAyXsBanohxtuJEBp56W9yXNo1Dc7dv/b583CyxwNDn9NPaneVHFt9wmy76VHl3YYjMfhs86vNQt4NWmVkQy6a5WFdmw9G3H2GgjZgoy1ihgT4TOTu1GutD86esfIH4JwdJRoowfy7wtKGhiavSzpaMQxIUmasre4RTPwAL311ee9Y4q/EnJEalkoUG6HZz3ENYxtTHUojOu4RQPg+aKRnQy+4L6MF//NDpcWxxFkdxCWZiUwV5T9+OMW6OnvGCi9yg0H4LWhSrMGRFL7bvmK7/cCnGkdVbvE=) 2026-03-07 00:21:22.687704 | orchestrator | 2026-03-07 00:21:22.687716 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-07 00:21:22.687730 | orchestrator | Saturday 07 March 2026 00:21:22 +0000 (0:00:01.067) 0:00:12.123 ******** 2026-03-07 00:21:22.687760 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGqk7hq9gMUG5LrHyjxy3zsw0XrqSNnJbTV48rm6qsVgAEMQzCvdUJMK5szBKCQ1nux/9jfhb3ruP5RZYtHPGD4=) 2026-03-07 00:21:34.148210 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMIwCSt41sh0jd9ApJ4oTGl/kXXtXOly9eIUSNRDklui) 2026-03-07 00:21:34.148337 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCw4Z2XRdvIXUjUSzmARvHdgdJTT4ku1apTct/KQslZ6sRHTRrIQUxGGv2siL3X9gcgatVdEsMpRjPPVQfpQfAuxKIe6DgZCu9Y+vG/qxkoUeSsbqcmsSrH2iuIoqbVmYCVYxj6sXiVNNcKCmYOTnTD+5dSd4pc0z1hyW7Lz4d1le0cLVMf2zS3J4NPZIZCU1z9t2mbMvHG3uxAcP1J+z4JEpqKsl2wiwuKOJwdkNxCXvr/Hox6gTAHsqr2BJS3v+wfXg1p3P9x6TDtTIi+pot9DOxisHVq5V8wckqHHB4LNNQ3ZlWj30Arth/FHakPwBBCV1kDlUxJgw+g5aWyW/kmc6TpN4xBxmS9nX8b6sNvy3vhHb/IKDJUUqveUljmL1NPW5aHCkt2RRvENgeABBhynDz/d+MMjip3tCmm81dzVsEzaKwdfcUpBsqowcU+yBSyuixhGwgGuuySMR0HgxvqA4eBCfczuv1AJ+tv9Od/7tX5JGvOXYP9EEvNUL8opjk=) 2026-03-07 00:21:34.148356 | orchestrator | 2026-03-07 00:21:34.148370 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-07 00:21:34.148383 | orchestrator | Saturday 07 March 2026 00:21:23 +0000 (0:00:01.126) 0:00:13.250 ******** 2026-03-07 00:21:34.148394 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIO+KuBcmFQGjCyPde0cHH41CEgqRSCtQ18335iPIrSnn) 2026-03-07 00:21:34.148407 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC1b7WT8yTj1ux1bEEmqQM6p+QvpDpUKOeE7wp+RKNtpmzWobTdS+BCC0hnDv6YgLWxu2jAbsBu84JJS0oDTWABwXwWou8WiEmO6Zye+16VMKfFMTwW/0+QkY296QGrtVQl5iMk643VwLHUDFBw4w9GAjynz/4xHEKYZahimC/VSUhq2j9XLN7I3bwGJzoJLkWDRwexrIVjmeGg3ksaGgrxUxu+DzsoCK3b0in/SnRM0of8ZnL/vA5LVJaSqoCffnVSKyXrqREAch1gQLRgMz3HFLMD6v07t19Imso+QlpndZNyjwFuouDKS9q/lB5GMm3J1HOXnAHy+2NKAK5tU7x4r6Jn6JkxKO9k18z9hl+cxD+XZDLXGfj8O/kavtDxbUmyp6QTQ7iYZLKoS4MBgjZad4Qpaq04J6sQ3avAwpSOCXqol+r4Qcf9qTv297wPOjty3jwmmd1ut46vwLYMKv5TERha16XPcrU7D08c2G99rvJKZ8f3pQU6QG8atL0r7VE=) 2026-03-07 00:21:34.148419 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBINkhUMBrVStwBh9hyMWFoj4EsWka1F9gU1pj2X/qXZfj+YqEmwvy9enKId/aCpb3L9IlggCryM+OEnpVrL3DNc=) 2026-03-07 00:21:34.148432 | orchestrator | 2026-03-07 00:21:34.148444 | 
orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2026-03-07 00:21:34.148456 | orchestrator | Saturday 07 March 2026 00:21:24 +0000 (0:00:01.134) 0:00:14.384 ******** 2026-03-07 00:21:34.148468 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-03-07 00:21:34.148480 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-03-07 00:21:34.148491 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-03-07 00:21:34.148502 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-03-07 00:21:34.148513 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-03-07 00:21:34.148544 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-03-07 00:21:34.148581 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-03-07 00:21:34.148593 | orchestrator | 2026-03-07 00:21:34.148604 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2026-03-07 00:21:34.148616 | orchestrator | Saturday 07 March 2026 00:21:29 +0000 (0:00:05.371) 0:00:19.756 ******** 2026-03-07 00:21:34.148628 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-03-07 00:21:34.148641 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-03-07 00:21:34.148652 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-03-07 00:21:34.148663 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-03-07 00:21:34.148674 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-03-07 00:21:34.148685 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-03-07 00:21:34.148696 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-03-07 00:21:34.148709 | orchestrator | 2026-03-07 00:21:34.148739 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-07 00:21:34.148753 | orchestrator | Saturday 07 March 2026 00:21:30 +0000 (0:00:00.179) 0:00:19.935 ******** 2026-03-07 00:21:34.148769 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC9KPh2Ey38R98xzKdprcXuuaRTH17SjcJexyBE79lqa/3GtfVvPPQN0OqxqVVZBTU5036fDa+53sgLV9LWK/Z6rVErU5TCbGNhJaJ167Nj0lsc5X1sSmm+Oc7oqDysa4jf1VhzD7DrqPaSUYI3pWs5k4YyHS25oj+8sZrdwVOmnVIRmUVu6dx35hmvztnG7TheSIqEKm/Act2naWZr3q93aDBXmEH+Xk/BhTQbtreZE5+PBheFB+CbstKTmjkRsj2EB2YGa4+3ANSuDOQmbICJ2my2O+kwtFdLLYNZLwefx0d7z1prsoRtht6LjbDQ1DNi9IDFisi/kHnw9XbNtc5qhfIo3s2okEraAETzDllzQpDW5hRWl4/rpNzxJi1FjrVSvxa1pNcQGLYwG4RkJbcg1/I4nJp5nM8OVtI2avXISxD+s8o5hAcckKkzJcmlvfUWr8SjQdDkkshrqol6/USnEdYVZe3CtPImZjDl8TAGf5HCXXn+1k52N82jxCvbHCc=) 2026-03-07 00:21:34.148809 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKUvLi1fjxFnGLLKhfSpYa+8QqJajNBRlsQ/CK5z2so7Dp1woFxhkIRmPqWBpT4kPVyIYgOr0B1EskX7ooFAh2o=) 2026-03-07 00:21:34.148823 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFL5AqqG3XUSjde77rcJ/fc9A3qrr4u+HfcKqee5wHRZ) 2026-03-07 00:21:34.148835 | orchestrator | 2026-03-07 00:21:34.148847 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-07 00:21:34.148860 | orchestrator | Saturday 07 March 2026 00:21:31 +0000 (0:00:01.139) 0:00:21.075 ******** 2026-03-07 00:21:34.148873 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3WzCUpOTYro+3Ny1bOs1dt1VZ6fEKwl09fZEy/LASODR+VX0KdK0Wt1QJAJLniJwuZLljSkApjB/FqhQdxCYYthR5vGVJEf2RPbFYCdG8hcL++nhGNf7/nwXBvNQTBfJhXX4FpWZIbhpMdOFPlmzW1i/vFF9JQVpdLZxeA5TvbkS4mBrAKlBaGiG8r9mcimm7eiCxMhxRol0+uNN+vQEysqjQ8w7oOu3Y+l6IzRAMM9VhWuul8YHj6S2kBuTAxYvYqPrK/0loJkzWT6GAY/8dSQ8FE/Hk3b7GJBFcBtFuUrTostN7hXG8lIt+r6h6nFYVRki6jNuC710vDIssVtOH2aR4+ir1VkHl0tEOCj7P4qbHHsEAAiSFSCyLnRVdA6l9glxlaHTqHp5TlN4D8xQ5XRkmG8YXRupAodYfrSqSQTL3CAZymufuccs42IexcvzUvh+hWJLBd52Lhuz/fRPH84UcDABJ5a1OllRwBIzTl6tQ+vh3p5Xfj2p9NW2NkCE=) 2026-03-07 00:21:34.148895 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCKfBNzmixol8E9/bnZblDzdB+1NzcFkHosEJMJp3Lke/c48DYMG25CLT84GJpP/8Hav5ZYKhZjhR82aWkEfVxI=) 2026-03-07 00:21:34.148908 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAII6Z7IEdjFMNzF08yb0kqNUkUsmBeFBy/8J1Dqemknp9) 2026-03-07 00:21:34.148921 | orchestrator | 2026-03-07 00:21:34.148934 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-07 00:21:34.148946 | orchestrator | Saturday 07 March 2026 00:21:32 +0000 (0:00:01.069) 0:00:22.144 ******** 2026-03-07 00:21:34.148959 | 
orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHcckFWkRO3wDmYXoRr5jazV6CcRrivpjenzfJhjyeFYjbjDY2evQGVdh4aCgNIL5q4ZvJF3dX6qk4N1KXOCw8Q=) 2026-03-07 00:21:34.148972 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDaGQaA5Za3lGIEbkAVh6UnwKfcD7yKDnOCLS2v3qti7DmweYTxdZrkOghmELWClqzvsU03CmVoYY71oDy9CUnRxMSrin+FzWefErTwkZvwklWdhXdkoExcjMvorAiu2sct3ui2sRpWmOUwIct54mkRjXxeDPPb3k2D3D4GsGfW/jzaoE+16R3vF2QDt9kl3OMOVe4KgE6aBCTJ5qs/bbdHOPBHBsnfgjh5OP+YqOMst+AZgWdibVkTWndjiDpx5PP3mZzRSOW0ghusbbUSp6lgqq/FafKPgw8UbfGb7l1InRyFeg2k27Qg5V1YWWmD2IwEXUreTS304XJvIEbOc1ZWz1oXNP3tpAfJzL16Lh+NWVQA5ccAnb/VJwKKMenjyrBG8I7nViq28mYyVJA0BVvTPBZ9ujB1Gl26M8PzMLoJ4HFuCOWwsBHEsGZqn2lAd9IA9HnZ2U7i24lTlVqoNGl79dPJtRfITobi2C6FILOqnSVlmrQmg/qI8zp/wwzvIw8=) 2026-03-07 00:21:34.148985 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKNSZbl8p1MFVfUGEaqfAfTMalg7b+kTQLBiDNK0jms9) 2026-03-07 00:21:34.148997 | orchestrator | 2026-03-07 00:21:34.149010 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-07 00:21:34.149022 | orchestrator | Saturday 07 March 2026 00:21:33 +0000 (0:00:01.122) 0:00:23.267 ******** 2026-03-07 00:21:34.149035 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIM5PTgtBfy8dgykERKaLQ/LOYsakL/6fX7ev+83CxJ8S) 2026-03-07 00:21:34.149070 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCd19rgot6XAbO3lZkQWcsFvQVO4KVCqXUMz2PkHph6ko7VRCjHGmH1/lmiI9jLoFL0RWy+dEvroEt8RLVphTd4ZZ1mokys+pzWU8CVGVjBxKJNLiWgXojdT2S/E3Du0TrzELfp7x4bezBpTZdCKPUDKFL2NWsI9RRVNpUXN3LQgBgvS0kYUNBqUBcWncUp+Ct9fo/OPzdMb0kw5p3z5XClVxDiOxe56uW3T5FFFIiJQvS0OnJYZPmb7YoVuoYiGJhE27RGtXpOQ80TRZX9rkJswV/yIJng9A/MWBTeIqAA/4dCJ5T7cs60cANlqKymv0xCmLVDQHxmqdgr2HzQr43tIoLEt+1iuudcBQiSxR+uFC1RcZcx2Vb+Wa2gsGQ7nZJHDvL8M9xoVhAlP8OvnQ7L5EK2rf+EAu1atO9K4KBx+HCjr2mZFGSENpUaeayJBiDB7xa1CiXdrh6x3ieTUNN47ZRTGLZ9AJ/R9nk5RccJ10WlU977iBPWNpPh3t00y5M=) 2026-03-07 00:21:39.090921 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFkcD6wNHoIm+t+Ji1ivn1k7x/nhcPWp/RM0bxU8qtRpyDvaYEtGhVU59lOH4Hvlv+fgGumqP05OU2WH9UfaWjg=) 2026-03-07 00:21:39.091067 | orchestrator | 2026-03-07 00:21:39.091096 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-07 00:21:39.091119 | orchestrator | Saturday 07 March 2026 00:21:34 +0000 (0:00:01.095) 0:00:24.363 ******** 2026-03-07 00:21:39.091141 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILBGY2GpfHyIWZ3zPCrYl6feQIldOazzoa8Ris0Wk8sF) 2026-03-07 00:21:39.091165 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCgrI4EVWFO5NqTUiDhOEBGqjwJwUe41iFa8dsaF9Ts0sYQKSzV9EVkE5cvAtiQ7HpcP3kN65BM9JjsW4h05Bq7M+Q7Vhlmp5MlILXpawT+h3rDoGkiYfBe88d8sNZ2WSKEPvu3MvMABwdIpFj+eSLacuvdXHMuqFVTj8xDIf12Gp86nXRR7voTLckhgxeJKbQ5FKUeaVEy60rQZ2Qg6ymtA+4o7ZHGlJxAyXsBanohxtuJEBp56W9yXNo1Dc7dv/b583CyxwNDn9NPaneVHFt9wmy76VHl3YYjMfhs86vNQt4NWmVkQy6a5WFdmw9G3H2GgjZgoy1ihgT4TOTu1GutD86esfIH4JwdJRoowfy7wtKGhiavSzpaMQxIUmasre4RTPwAL311ee9Y4q/EnJEalkoUG6HZz3ENYxtTHUojOu4RQPg+aKRnQy+4L6MF//NDpcWxxFkdxCWZiUwV5T9+OMW6OnvGCi9yg0H4LWhSrMGRFL7bvmK7/cCnGkdVbvE=) 2026-03-07 00:21:39.091224 | orchestrator | changed: [testbed-manager] => 
(item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBB535FA0o42Br0XHr5mCjpK9r3IXotcsz9k8M2dO8QmU8CI9BsQHTu6zjJFsUmFU7SPWJesoKVdI1BXO9CS76G0=) 2026-03-07 00:21:39.091247 | orchestrator | 2026-03-07 00:21:39.091288 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-07 00:21:39.091309 | orchestrator | Saturday 07 March 2026 00:21:35 +0000 (0:00:01.102) 0:00:25.466 ******** 2026-03-07 00:21:39.091331 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCw4Z2XRdvIXUjUSzmARvHdgdJTT4ku1apTct/KQslZ6sRHTRrIQUxGGv2siL3X9gcgatVdEsMpRjPPVQfpQfAuxKIe6DgZCu9Y+vG/qxkoUeSsbqcmsSrH2iuIoqbVmYCVYxj6sXiVNNcKCmYOTnTD+5dSd4pc0z1hyW7Lz4d1le0cLVMf2zS3J4NPZIZCU1z9t2mbMvHG3uxAcP1J+z4JEpqKsl2wiwuKOJwdkNxCXvr/Hox6gTAHsqr2BJS3v+wfXg1p3P9x6TDtTIi+pot9DOxisHVq5V8wckqHHB4LNNQ3ZlWj30Arth/FHakPwBBCV1kDlUxJgw+g5aWyW/kmc6TpN4xBxmS9nX8b6sNvy3vhHb/IKDJUUqveUljmL1NPW5aHCkt2RRvENgeABBhynDz/d+MMjip3tCmm81dzVsEzaKwdfcUpBsqowcU+yBSyuixhGwgGuuySMR0HgxvqA4eBCfczuv1AJ+tv9Od/7tX5JGvOXYP9EEvNUL8opjk=) 2026-03-07 00:21:39.091353 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGqk7hq9gMUG5LrHyjxy3zsw0XrqSNnJbTV48rm6qsVgAEMQzCvdUJMK5szBKCQ1nux/9jfhb3ruP5RZYtHPGD4=) 2026-03-07 00:21:39.091375 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMIwCSt41sh0jd9ApJ4oTGl/kXXtXOly9eIUSNRDklui) 2026-03-07 00:21:39.091396 | orchestrator | 2026-03-07 00:21:39.091419 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-07 00:21:39.091441 | orchestrator | Saturday 07 March 2026 00:21:36 +0000 (0:00:01.128) 0:00:26.594 ******** 2026-03-07 00:21:39.091461 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBINkhUMBrVStwBh9hyMWFoj4EsWka1F9gU1pj2X/qXZfj+YqEmwvy9enKId/aCpb3L9IlggCryM+OEnpVrL3DNc=) 2026-03-07 00:21:39.091482 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC1b7WT8yTj1ux1bEEmqQM6p+QvpDpUKOeE7wp+RKNtpmzWobTdS+BCC0hnDv6YgLWxu2jAbsBu84JJS0oDTWABwXwWou8WiEmO6Zye+16VMKfFMTwW/0+QkY296QGrtVQl5iMk643VwLHUDFBw4w9GAjynz/4xHEKYZahimC/VSUhq2j9XLN7I3bwGJzoJLkWDRwexrIVjmeGg3ksaGgrxUxu+DzsoCK3b0in/SnRM0of8ZnL/vA5LVJaSqoCffnVSKyXrqREAch1gQLRgMz3HFLMD6v07t19Imso+QlpndZNyjwFuouDKS9q/lB5GMm3J1HOXnAHy+2NKAK5tU7x4r6Jn6JkxKO9k18z9hl+cxD+XZDLXGfj8O/kavtDxbUmyp6QTQ7iYZLKoS4MBgjZad4Qpaq04J6sQ3avAwpSOCXqol+r4Qcf9qTv297wPOjty3jwmmd1ut46vwLYMKv5TERha16XPcrU7D08c2G99rvJKZ8f3pQU6QG8atL0r7VE=) 2026-03-07 00:21:39.091503 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIO+KuBcmFQGjCyPde0cHH41CEgqRSCtQ18335iPIrSnn) 2026-03-07 00:21:39.091522 | orchestrator | 2026-03-07 00:21:39.091541 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2026-03-07 00:21:39.091562 | orchestrator | Saturday 07 March 2026 00:21:37 +0000 (0:00:01.108) 0:00:27.703 ******** 2026-03-07 00:21:39.091580 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2026-03-07 00:21:39.091600 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2026-03-07 00:21:39.091618 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2026-03-07 00:21:39.091638 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-03-07 00:21:39.091686 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-03-07 00:21:39.091707 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-03-07 00:21:39.091727 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-03-07 00:21:39.091747 | orchestrator | skipping: 
[testbed-manager] 2026-03-07 00:21:39.091804 | orchestrator | 2026-03-07 00:21:39.091826 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2026-03-07 00:21:39.091844 | orchestrator | Saturday 07 March 2026 00:21:38 +0000 (0:00:00.160) 0:00:27.863 ******** 2026-03-07 00:21:39.091880 | orchestrator | skipping: [testbed-manager] 2026-03-07 00:21:39.091900 | orchestrator | 2026-03-07 00:21:39.091918 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2026-03-07 00:21:39.091937 | orchestrator | Saturday 07 March 2026 00:21:38 +0000 (0:00:00.064) 0:00:27.928 ******** 2026-03-07 00:21:39.091955 | orchestrator | skipping: [testbed-manager] 2026-03-07 00:21:39.091974 | orchestrator | 2026-03-07 00:21:39.091993 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2026-03-07 00:21:39.092011 | orchestrator | Saturday 07 March 2026 00:21:38 +0000 (0:00:00.055) 0:00:27.983 ******** 2026-03-07 00:21:39.092029 | orchestrator | changed: [testbed-manager] 2026-03-07 00:21:39.092047 | orchestrator | 2026-03-07 00:21:39.092066 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-07 00:21:39.092086 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-07 00:21:39.092107 | orchestrator | 2026-03-07 00:21:39.092125 | orchestrator | 2026-03-07 00:21:39.092144 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-07 00:21:39.092163 | orchestrator | Saturday 07 March 2026 00:21:38 +0000 (0:00:00.738) 0:00:28.722 ******** 2026-03-07 00:21:39.092181 | orchestrator | =============================================================================== 2026-03-07 00:21:39.092199 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.08s 2026-03-07 
00:21:39.092218 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.37s 2026-03-07 00:21:39.092238 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.22s 2026-03-07 00:21:39.092255 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.14s 2026-03-07 00:21:39.092273 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.14s 2026-03-07 00:21:39.092291 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.14s 2026-03-07 00:21:39.092310 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.13s 2026-03-07 00:21:39.092327 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.13s 2026-03-07 00:21:39.092348 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.13s 2026-03-07 00:21:39.092366 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.12s 2026-03-07 00:21:39.092384 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.12s 2026-03-07 00:21:39.092403 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.11s 2026-03-07 00:21:39.092420 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s 2026-03-07 00:21:39.092450 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s 2026-03-07 00:21:39.092467 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2026-03-07 00:21:39.092487 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2026-03-07 00:21:39.092506 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.74s 2026-03-07 
00:21:39.092525 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.18s 2026-03-07 00:21:39.092544 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.18s 2026-03-07 00:21:39.092562 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.16s 2026-03-07 00:21:39.417563 | orchestrator | + osism apply squid 2026-03-07 00:21:51.560540 | orchestrator | 2026-03-07 00:21:51 | INFO  | Prepare task for execution of squid. 2026-03-07 00:21:51.636369 | orchestrator | 2026-03-07 00:21:51 | INFO  | Task 33916ce0-4c24-4ddf-8311-198703d16057 (squid) was prepared for execution. 2026-03-07 00:21:51.636454 | orchestrator | 2026-03-07 00:21:51 | INFO  | It takes a moment until task 33916ce0-4c24-4ddf-8311-198703d16057 (squid) has been started and output is visible here. 2026-03-07 00:23:54.088396 | orchestrator | 2026-03-07 00:23:54.088499 | orchestrator | PLAY [Apply role squid] ******************************************************** 2026-03-07 00:23:54.088513 | orchestrator | 2026-03-07 00:23:54.088524 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2026-03-07 00:23:54.088534 | orchestrator | Saturday 07 March 2026 00:21:56 +0000 (0:00:00.176) 0:00:00.176 ******** 2026-03-07 00:23:54.088587 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2026-03-07 00:23:54.088598 | orchestrator | 2026-03-07 00:23:54.088607 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2026-03-07 00:23:54.088616 | orchestrator | Saturday 07 March 2026 00:21:56 +0000 (0:00:00.100) 0:00:00.277 ******** 2026-03-07 00:23:54.088625 | orchestrator | ok: [testbed-manager] 2026-03-07 00:23:54.088635 | orchestrator | 2026-03-07 00:23:54.088644 | 
orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2026-03-07 00:23:54.088653 | orchestrator | Saturday 07 March 2026 00:21:57 +0000 (0:00:01.526) 0:00:01.804 ******** 2026-03-07 00:23:54.088662 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2026-03-07 00:23:54.088671 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2026-03-07 00:23:54.088680 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2026-03-07 00:23:54.088689 | orchestrator | 2026-03-07 00:23:54.088697 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2026-03-07 00:23:54.088706 | orchestrator | Saturday 07 March 2026 00:21:58 +0000 (0:00:01.228) 0:00:03.032 ******** 2026-03-07 00:23:54.088714 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2026-03-07 00:23:54.088723 | orchestrator | 2026-03-07 00:23:54.088731 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2026-03-07 00:23:54.088740 | orchestrator | Saturday 07 March 2026 00:21:59 +0000 (0:00:01.122) 0:00:04.154 ******** 2026-03-07 00:23:54.088749 | orchestrator | ok: [testbed-manager] 2026-03-07 00:23:54.088757 | orchestrator | 2026-03-07 00:23:54.088766 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2026-03-07 00:23:54.088774 | orchestrator | Saturday 07 March 2026 00:22:00 +0000 (0:00:00.354) 0:00:04.508 ******** 2026-03-07 00:23:54.088783 | orchestrator | changed: [testbed-manager] 2026-03-07 00:23:54.088792 | orchestrator | 2026-03-07 00:23:54.088800 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2026-03-07 00:23:54.088809 | orchestrator | Saturday 07 March 2026 00:22:01 +0000 (0:00:00.923) 0:00:05.432 ******** 2026-03-07 00:23:54.088817 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 
retries left). 2026-03-07 00:23:54.088827 | orchestrator | ok: [testbed-manager] 2026-03-07 00:23:54.088835 | orchestrator | 2026-03-07 00:23:54.088844 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2026-03-07 00:23:54.088853 | orchestrator | Saturday 07 March 2026 00:22:37 +0000 (0:00:35.880) 0:00:41.313 ******** 2026-03-07 00:23:54.088861 | orchestrator | changed: [testbed-manager] 2026-03-07 00:23:54.088870 | orchestrator | 2026-03-07 00:23:54.088886 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2026-03-07 00:23:54.088895 | orchestrator | Saturday 07 March 2026 00:22:53 +0000 (0:00:15.861) 0:00:57.174 ******** 2026-03-07 00:23:54.088904 | orchestrator | Pausing for 60 seconds 2026-03-07 00:23:54.088913 | orchestrator | changed: [testbed-manager] 2026-03-07 00:23:54.088922 | orchestrator | 2026-03-07 00:23:54.088930 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2026-03-07 00:23:54.088939 | orchestrator | Saturday 07 March 2026 00:23:53 +0000 (0:01:00.092) 0:01:57.267 ******** 2026-03-07 00:23:54.088950 | orchestrator | ok: [testbed-manager] 2026-03-07 00:23:54.088960 | orchestrator | 2026-03-07 00:23:54.088970 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2026-03-07 00:23:54.088995 | orchestrator | Saturday 07 March 2026 00:23:53 +0000 (0:00:00.081) 0:01:57.349 ******** 2026-03-07 00:23:54.089006 | orchestrator | changed: [testbed-manager] 2026-03-07 00:23:54.089016 | orchestrator | 2026-03-07 00:23:54.089026 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-07 00:23:54.089037 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-07 00:23:54.089047 | orchestrator | 2026-03-07 00:23:54.089058 | orchestrator | 2026-03-07 00:23:54.089067 | 
orchestrator | TASKS RECAP ******************************************************************** 2026-03-07 00:23:54.089078 | orchestrator | Saturday 07 March 2026 00:23:53 +0000 (0:00:00.641) 0:01:57.991 ******** 2026-03-07 00:23:54.089089 | orchestrator | =============================================================================== 2026-03-07 00:23:54.089099 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.09s 2026-03-07 00:23:54.089109 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 35.88s 2026-03-07 00:23:54.089119 | orchestrator | osism.services.squid : Restart squid service --------------------------- 15.86s 2026-03-07 00:23:54.089129 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.53s 2026-03-07 00:23:54.089139 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.23s 2026-03-07 00:23:54.089149 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.12s 2026-03-07 00:23:54.089160 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.92s 2026-03-07 00:23:54.089171 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.64s 2026-03-07 00:23:54.089181 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.35s 2026-03-07 00:23:54.089190 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.10s 2026-03-07 00:23:54.089198 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.08s 2026-03-07 00:23:54.410392 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-03-07 00:23:54.410483 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla 2026-03-07 00:23:54.417914 | orchestrator | + set -e 2026-03-07 00:23:54.418008 | orchestrator | + NAMESPACE=kolla 
2026-03-07 00:23:54.418077 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla#g' /opt/configuration/inventory/group_vars/all/kolla.yml 2026-03-07 00:23:54.425149 | orchestrator | ++ semver latest 9.0.0 2026-03-07 00:23:54.485057 | orchestrator | + [[ -1 -lt 0 ]] 2026-03-07 00:23:54.485178 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-03-07 00:23:54.485204 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2026-03-07 00:24:06.615170 | orchestrator | 2026-03-07 00:24:06 | INFO  | Prepare task for execution of operator. 2026-03-07 00:24:06.698342 | orchestrator | 2026-03-07 00:24:06 | INFO  | Task ca4b8f2c-a0a0-4cc3-9e29-0628dc8ec0ad (operator) was prepared for execution. 2026-03-07 00:24:06.698450 | orchestrator | 2026-03-07 00:24:06 | INFO  | It takes a moment until task ca4b8f2c-a0a0-4cc3-9e29-0628dc8ec0ad (operator) has been started and output is visible here. 2026-03-07 00:24:22.681444 | orchestrator | 2026-03-07 00:24:22.681599 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2026-03-07 00:24:22.681617 | orchestrator | 2026-03-07 00:24:22.681629 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-07 00:24:22.681641 | orchestrator | Saturday 07 March 2026 00:24:11 +0000 (0:00:00.167) 0:00:00.167 ******** 2026-03-07 00:24:22.681653 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:24:22.681666 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:24:22.681677 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:24:22.681688 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:24:22.681699 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:24:22.681714 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:24:22.681725 | orchestrator | 2026-03-07 00:24:22.681736 | orchestrator | TASK [Do not require tty for all users] **************************************** 2026-03-07 00:24:22.681766 | orchestrator | Saturday 07 March 
2026 00:24:14 +0000 (0:00:03.298) 0:00:03.466 ******** 2026-03-07 00:24:22.681778 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:24:22.681789 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:24:22.681800 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:24:22.681811 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:24:22.681822 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:24:22.681832 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:24:22.681843 | orchestrator | 2026-03-07 00:24:22.681855 | orchestrator | PLAY [Apply role operator] ***************************************************** 2026-03-07 00:24:22.681866 | orchestrator | 2026-03-07 00:24:22.681877 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-03-07 00:24:22.681888 | orchestrator | Saturday 07 March 2026 00:24:15 +0000 (0:00:00.754) 0:00:04.221 ******** 2026-03-07 00:24:22.681899 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:24:22.681910 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:24:22.681920 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:24:22.681931 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:24:22.681942 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:24:22.681953 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:24:22.681964 | orchestrator | 2026-03-07 00:24:22.681976 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-03-07 00:24:22.681989 | orchestrator | Saturday 07 March 2026 00:24:15 +0000 (0:00:00.193) 0:00:04.415 ******** 2026-03-07 00:24:22.682002 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:24:22.682132 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:24:22.682160 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:24:22.682179 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:24:22.682211 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:24:22.682231 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:24:22.682250 | 
orchestrator | 2026-03-07 00:24:22.682270 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-03-07 00:24:22.682289 | orchestrator | Saturday 07 March 2026 00:24:15 +0000 (0:00:00.195) 0:00:04.611 ******** 2026-03-07 00:24:22.682307 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:24:22.682325 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:24:22.682342 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:24:22.682358 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:24:22.682377 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:24:22.682396 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:24:22.682414 | orchestrator | 2026-03-07 00:24:22.682433 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-03-07 00:24:22.682452 | orchestrator | Saturday 07 March 2026 00:24:16 +0000 (0:00:00.581) 0:00:05.192 ******** 2026-03-07 00:24:22.682472 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:24:22.682532 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:24:22.682554 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:24:22.682574 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:24:22.682593 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:24:22.682614 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:24:22.682633 | orchestrator | 2026-03-07 00:24:22.682651 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-03-07 00:24:22.682670 | orchestrator | Saturday 07 March 2026 00:24:16 +0000 (0:00:00.812) 0:00:06.005 ******** 2026-03-07 00:24:22.682691 | orchestrator | changed: [testbed-node-0] => (item=adm) 2026-03-07 00:24:22.682712 | orchestrator | changed: [testbed-node-1] => (item=adm) 2026-03-07 00:24:22.682733 | orchestrator | changed: [testbed-node-2] => (item=adm) 2026-03-07 00:24:22.682752 | orchestrator | changed: [testbed-node-4] => 
(item=adm)
2026-03-07 00:24:22.682771 | orchestrator | changed: [testbed-node-3] => (item=adm)
2026-03-07 00:24:22.682782 | orchestrator | changed: [testbed-node-5] => (item=adm)
2026-03-07 00:24:22.682792 | orchestrator | changed: [testbed-node-0] => (item=sudo)
2026-03-07 00:24:22.682803 | orchestrator | changed: [testbed-node-1] => (item=sudo)
2026-03-07 00:24:22.682834 | orchestrator | changed: [testbed-node-3] => (item=sudo)
2026-03-07 00:24:22.682852 | orchestrator | changed: [testbed-node-2] => (item=sudo)
2026-03-07 00:24:22.682870 | orchestrator | changed: [testbed-node-4] => (item=sudo)
2026-03-07 00:24:22.682887 | orchestrator | changed: [testbed-node-5] => (item=sudo)
2026-03-07 00:24:22.682902 | orchestrator | 
2026-03-07 00:24:22.682921 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] *************************
2026-03-07 00:24:22.682940 | orchestrator | Saturday 07 March 2026 00:24:18 +0000 (0:00:01.176) 0:00:07.182 ********
2026-03-07 00:24:22.682957 | orchestrator | changed: [testbed-node-4]
2026-03-07 00:24:22.682976 | orchestrator | changed: [testbed-node-3]
2026-03-07 00:24:22.682995 | orchestrator | changed: [testbed-node-5]
2026-03-07 00:24:22.683014 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:24:22.683029 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:24:22.683039 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:24:22.683050 | orchestrator | 
2026-03-07 00:24:22.683061 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] ***
2026-03-07 00:24:22.683073 | orchestrator | Saturday 07 March 2026 00:24:19 +0000 (0:00:01.211) 0:00:08.394 ********
2026-03-07 00:24:22.683084 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8)
2026-03-07 00:24:22.683096 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8)
2026-03-07 00:24:22.683107 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8)
2026-03-07 00:24:22.683117 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8)
2026-03-07 00:24:22.683129 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8)
2026-03-07 00:24:22.683162 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8)
2026-03-07 00:24:22.683174 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8)
2026-03-07 00:24:22.683185 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8)
2026-03-07 00:24:22.683196 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8)
2026-03-07 00:24:22.683206 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8)
2026-03-07 00:24:22.683217 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8)
2026-03-07 00:24:22.683228 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8)
2026-03-07 00:24:22.683239 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8)
2026-03-07 00:24:22.683250 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8)
2026-03-07 00:24:22.683260 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created
2026-03-07 00:24:22.683271 | orchestrator | with a mode of 0700, this may cause issues when running as another user. To
2026-03-07 00:24:22.683282 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually
2026-03-07 00:24:22.683292 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8)
2026-03-07 00:24:22.683303 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8)
2026-03-07 00:24:22.683314 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8)
2026-03-07 00:24:22.683324 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8)
2026-03-07 00:24:22.683335 | orchestrator | 
2026-03-07 00:24:22.683346 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] ***
2026-03-07 00:24:22.683358 | orchestrator | Saturday 07 March 2026 00:24:20 +0000 (0:00:01.190) 0:00:09.584 ********
2026-03-07 00:24:22.683368 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:24:22.683379 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:24:22.683390 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:24:22.683408 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:24:22.683419 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:24:22.683430 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:24:22.683440 | orchestrator | 
2026-03-07 00:24:22.683451 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] ***
2026-03-07 00:24:22.683471 | orchestrator | Saturday 07 March 2026 00:24:20 +0000 (0:00:00.154) 0:00:09.739 ********
2026-03-07 00:24:22.683482 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:24:22.683623 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:24:22.683658 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:24:22.683669 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:24:22.683680 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:24:22.683691 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:24:22.683702 | orchestrator | 
2026-03-07 00:24:22.683713 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2026-03-07 00:24:22.683724 | orchestrator | Saturday 07 March 2026 00:24:20 +0000 (0:00:00.200) 0:00:09.940 ********
2026-03-07 00:24:22.683735 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:24:22.683746 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:24:22.683757 | orchestrator | changed: [testbed-node-4]
2026-03-07 00:24:22.683768 | orchestrator | changed: [testbed-node-5]
2026-03-07 00:24:22.683778 | orchestrator | changed: [testbed-node-3]
2026-03-07 00:24:22.683789 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:24:22.683800 | orchestrator | 
2026-03-07 00:24:22.683810 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2026-03-07 00:24:22.683821 | orchestrator | Saturday 07 March 2026 00:24:21 +0000 (0:00:00.611) 0:00:10.551 ********
2026-03-07 00:24:22.683832 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:24:22.683843 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:24:22.683854 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:24:22.683864 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:24:22.683875 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:24:22.683886 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:24:22.683897 | orchestrator | 
2026-03-07 00:24:22.683908 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2026-03-07 00:24:22.683919 | orchestrator | Saturday 07 March 2026 00:24:21 +0000 (0:00:00.190) 0:00:10.742 ********
2026-03-07 00:24:22.683929 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-07 00:24:22.683940 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:24:22.683951 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-03-07 00:24:22.683962 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-03-07 00:24:22.683973 | orchestrator | changed: [testbed-node-4]
2026-03-07 00:24:22.683983 | orchestrator | changed: [testbed-node-3]
2026-03-07 00:24:22.683994 | orchestrator | changed: [testbed-node-1] => (item=None)
2026-03-07 00:24:22.684005 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-03-07 00:24:22.684015 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:24:22.684026 | orchestrator | changed: [testbed-node-5]
2026-03-07 00:24:22.684037 | orchestrator | changed: [testbed-node-2] => (item=None)
2026-03-07 00:24:22.684047 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:24:22.684058 | orchestrator | 
2026-03-07 00:24:22.684069 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2026-03-07 00:24:22.684171 | orchestrator | Saturday 07 March 2026 00:24:22 +0000 (0:00:00.710) 0:00:11.452 ********
2026-03-07 00:24:22.684185 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:24:22.684196 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:24:22.684206 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:24:22.684217 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:24:22.684228 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:24:22.684239 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:24:22.684249 | orchestrator | 
2026-03-07 00:24:22.684260 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2026-03-07 00:24:22.684271 | orchestrator | Saturday 07 March 2026 00:24:22 +0000 (0:00:00.180) 0:00:11.632 ********
2026-03-07 00:24:22.684282 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:24:22.684293 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:24:22.684304 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:24:22.684326 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:24:22.684353 | orchestrator | skipping: [testbed-node-4]
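The remote_tmp warning earlier in this play also names the fix: pre-create the directory with mode 0700 before a module runs as another user. A minimal shell sketch of that fix, using a placeholder path (the log's actual path, /root/.ansible/tmp, would require root on the target node):

```shell
# Pre-create an Ansible remote_tmp directory with the 0700 mode the
# warning expects, so the module does not have to create it implicitly.
# The path below is a stand-in for illustration only.
tmpdir="${TMPDIR:-/tmp}/example_remote_tmp"
mkdir -p "$tmpdir"
chmod 0700 "$tmpdir"
stat -c '%a' "$tmpdir"   # prints 700 on GNU coreutils
```

The same effect can be made permanent for the control node by setting `remote_tmp` in the `[defaults]` section of ansible.cfg.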
2026-03-07 00:24:24.042282 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:24:24.042390 | orchestrator | 
2026-03-07 00:24:24.042406 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2026-03-07 00:24:24.042419 | orchestrator | Saturday 07 March 2026 00:24:22 +0000 (0:00:00.179) 0:00:11.812 ********
2026-03-07 00:24:24.042430 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:24:24.042441 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:24:24.042452 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:24:24.042463 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:24:24.042474 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:24:24.042484 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:24:24.042539 | orchestrator | 
2026-03-07 00:24:24.042551 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2026-03-07 00:24:24.042562 | orchestrator | Saturday 07 March 2026 00:24:22 +0000 (0:00:00.165) 0:00:11.978 ********
2026-03-07 00:24:24.042573 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:24:24.042584 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:24:24.042595 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:24:24.042606 | orchestrator | changed: [testbed-node-4]
2026-03-07 00:24:24.042616 | orchestrator | changed: [testbed-node-3]
2026-03-07 00:24:24.042627 | orchestrator | changed: [testbed-node-5]
2026-03-07 00:24:24.042637 | orchestrator | 
2026-03-07 00:24:24.042648 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2026-03-07 00:24:24.042659 | orchestrator | Saturday 07 March 2026 00:24:23 +0000 (0:00:00.643) 0:00:12.621 ********
2026-03-07 00:24:24.042670 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:24:24.042681 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:24:24.042691 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:24:24.042702 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:24:24.042713 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:24:24.042724 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:24:24.042734 | orchestrator | 
2026-03-07 00:24:24.042745 | orchestrator | PLAY RECAP *********************************************************************
2026-03-07 00:24:24.042757 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-07 00:24:24.042793 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-07 00:24:24.042805 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-07 00:24:24.042819 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-07 00:24:24.042832 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-07 00:24:24.042845 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-07 00:24:24.042858 | orchestrator | 
2026-03-07 00:24:24.042870 | orchestrator | 
2026-03-07 00:24:24.042882 | orchestrator | TASKS RECAP ********************************************************************
2026-03-07 00:24:24.042895 | orchestrator | Saturday 07 March 2026 00:24:23 +0000 (0:00:00.274) 0:00:12.896 ********
2026-03-07 00:24:24.042908 | orchestrator | ===============================================================================
2026-03-07 00:24:24.042921 | orchestrator | Gathering Facts --------------------------------------------------------- 3.30s
2026-03-07 00:24:24.042933 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.21s
2026-03-07 00:24:24.042946 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.19s
2026-03-07 00:24:24.042982 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.18s
2026-03-07 00:24:24.042995 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.81s
2026-03-07 00:24:24.043007 | orchestrator | Do not require tty for all users ---------------------------------------- 0.75s
2026-03-07 00:24:24.043019 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.71s
2026-03-07 00:24:24.043032 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.64s
2026-03-07 00:24:24.043045 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.61s
2026-03-07 00:24:24.043058 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.58s
2026-03-07 00:24:24.043071 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.27s
2026-03-07 00:24:24.043085 | orchestrator | osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file --- 0.20s
2026-03-07 00:24:24.043097 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.20s
2026-03-07 00:24:24.043110 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.19s
2026-03-07 00:24:24.043123 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.19s
2026-03-07 00:24:24.043136 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.18s
2026-03-07 00:24:24.043149 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.18s
2026-03-07 00:24:24.043162 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.17s
2026-03-07 00:24:24.043175 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.15s
2026-03-07 00:24:24.367773 | orchestrator | + osism apply --environment custom facts
2026-03-07 00:24:26.395090 | orchestrator | 2026-03-07 00:24:26 | INFO  | Trying to run play facts in environment custom
2026-03-07 00:24:36.456100 | orchestrator | 2026-03-07 00:24:36 | INFO  | Prepare task for execution of facts.
2026-03-07 00:24:36.534778 | orchestrator | 2026-03-07 00:24:36 | INFO  | Task 0a838150-e74c-4615-9e84-a1d8b64252df (facts) was prepared for execution.
2026-03-07 00:24:36.534918 | orchestrator | 2026-03-07 00:24:36 | INFO  | It takes a moment until task 0a838150-e74c-4615-9e84-a1d8b64252df (facts) has been started and output is visible here.
2026-03-07 00:25:19.658533 | orchestrator | 
2026-03-07 00:25:19.658652 | orchestrator | PLAY [Copy custom network devices fact] ****************************************
2026-03-07 00:25:19.658669 | orchestrator | 
2026-03-07 00:25:19.658681 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-03-07 00:25:19.658692 | orchestrator | Saturday 07 March 2026 00:24:40 +0000 (0:00:00.063) 0:00:00.063 ********
2026-03-07 00:25:19.658704 | orchestrator | ok: [testbed-manager]
2026-03-07 00:25:19.658717 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:25:19.658729 | orchestrator | changed: [testbed-node-4]
2026-03-07 00:25:19.658740 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:25:19.658750 | orchestrator | changed: [testbed-node-3]
2026-03-07 00:25:19.658761 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:25:19.658772 | orchestrator | changed: [testbed-node-5]
2026-03-07 00:25:19.658783 | orchestrator | 
2026-03-07 00:25:19.658794 | orchestrator | TASK [Copy fact file] **********************************************************
2026-03-07 00:25:19.658805 | orchestrator | Saturday 07 March 2026 00:24:42 +0000 (0:00:01.413) 0:00:01.477 ********
2026-03-07 00:25:19.658816 | orchestrator | ok: [testbed-manager]
2026-03-07 00:25:19.658827 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:25:19.658838 | orchestrator | changed: [testbed-node-4]
2026-03-07 00:25:19.658850 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:25:19.658861 | orchestrator | changed: [testbed-node-5]
2026-03-07 00:25:19.658887 | orchestrator | changed: [testbed-node-3]
2026-03-07 00:25:19.658899 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:25:19.658932 | orchestrator | 
2026-03-07 00:25:19.658944 | orchestrator | PLAY [Copy custom ceph devices facts] ******************************************
2026-03-07 00:25:19.658954 | orchestrator | 
2026-03-07 00:25:19.658965 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2026-03-07 00:25:19.658976 | orchestrator | Saturday 07 March 2026 00:24:43 +0000 (0:00:01.213) 0:00:02.690 ********
2026-03-07 00:25:19.658989 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:25:19.659002 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:25:19.659014 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:25:19.659026 | orchestrator | 
2026-03-07 00:25:19.659040 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2026-03-07 00:25:19.659054 | orchestrator | Saturday 07 March 2026 00:24:43 +0000 (0:00:00.110) 0:00:02.800 ********
2026-03-07 00:25:19.659066 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:25:19.659078 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:25:19.659090 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:25:19.659102 | orchestrator | 
2026-03-07 00:25:19.659115 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2026-03-07 00:25:19.659127 | orchestrator | Saturday 07 March 2026 00:24:43 +0000 (0:00:00.246) 0:00:03.047 ********
2026-03-07 00:25:19.659140 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:25:19.659152 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:25:19.659164 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:25:19.659177 | orchestrator | 
2026-03-07 00:25:19.659189 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2026-03-07 00:25:19.659202 | orchestrator | Saturday 07 March 2026 00:24:44 +0000 (0:00:00.158) 0:00:03.288 ********
2026-03-07 00:25:19.659215 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-07 00:25:19.659229 | orchestrator | 
2026-03-07 00:25:19.659242 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2026-03-07 00:25:19.659255 | orchestrator | Saturday 07 March 2026 00:24:44 +0000 (0:00:00.442) 0:00:03.447 ********
2026-03-07 00:25:19.659267 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:25:19.659279 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:25:19.659291 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:25:19.659304 | orchestrator | 
2026-03-07 00:25:19.659318 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2026-03-07 00:25:19.659330 | orchestrator | Saturday 07 March 2026 00:24:44 +0000 (0:00:00.133) 0:00:03.890 ********
2026-03-07 00:25:19.659343 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:25:19.659354 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:25:19.659364 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:25:19.659375 | orchestrator | 
2026-03-07 00:25:19.659386 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2026-03-07 00:25:19.659416 | orchestrator | Saturday 07 March 2026 00:24:44 +0000 (0:00:00.133) 0:00:04.023 ********
2026-03-07 00:25:19.659427 | orchestrator | changed: [testbed-node-4]
2026-03-07 00:25:19.659438 | orchestrator | changed: [testbed-node-3]
2026-03-07 00:25:19.659449 | orchestrator | changed: [testbed-node-5]
2026-03-07 00:25:19.659459 | orchestrator | 
2026-03-07 00:25:19.659470 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2026-03-07 00:25:19.659481 | orchestrator | Saturday 07 March 2026 00:24:45 +0000 (0:00:01.031) 0:00:05.055 ********
2026-03-07 00:25:19.659491 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:25:19.659502 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:25:19.659513 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:25:19.659524 | orchestrator | 
2026-03-07 00:25:19.659535 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-03-07 00:25:19.659546 | orchestrator | Saturday 07 March 2026 00:24:46 +0000 (0:00:00.457) 0:00:05.512 ********
2026-03-07 00:25:19.659556 | orchestrator | changed: [testbed-node-3]
2026-03-07 00:25:19.659567 | orchestrator | changed: [testbed-node-4]
2026-03-07 00:25:19.659592 | orchestrator | changed: [testbed-node-5]
2026-03-07 00:25:19.659611 | orchestrator | 
2026-03-07 00:25:19.659622 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-03-07 00:25:19.659633 | orchestrator | Saturday 07 March 2026 00:24:47 +0000 (0:00:01.009) 0:00:06.522 ********
2026-03-07 00:25:19.659643 | orchestrator | changed: [testbed-node-4]
2026-03-07 00:25:19.659654 | orchestrator | changed: [testbed-node-5]
2026-03-07 00:25:19.659665 | orchestrator | changed: [testbed-node-3]
2026-03-07 00:25:19.659676 | orchestrator | 
2026-03-07 00:25:19.659686 | orchestrator | TASK [Install required packages (RedHat)] **************************************
2026-03-07 00:25:19.659697 | orchestrator | Saturday 07 March 2026 00:25:02 +0000 (0:00:15.054) 0:00:21.576 ********
2026-03-07 00:25:19.659708 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:25:19.659719 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:25:19.659730 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:25:19.659740 | orchestrator | 
2026-03-07 00:25:19.659751 | orchestrator | TASK [Install required packages (Debian)] **************************************
2026-03-07 00:25:19.659778 | orchestrator | Saturday 07 March 2026 00:25:02 +0000 (0:00:00.118) 0:00:21.695 ********
2026-03-07 00:25:19.659790 | orchestrator | changed: [testbed-node-5]
2026-03-07 00:25:19.659801 | orchestrator | changed: [testbed-node-4]
2026-03-07 00:25:19.659812 | orchestrator | changed: [testbed-node-3]
2026-03-07 00:25:19.659823 | orchestrator | 
2026-03-07 00:25:19.659834 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-03-07 00:25:19.659845 | orchestrator | Saturday 07 March 2026 00:25:09 +0000 (0:00:07.498) 0:00:29.193 ********
2026-03-07 00:25:19.659855 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:25:19.659866 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:25:19.659877 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:25:19.659888 | orchestrator | 
2026-03-07 00:25:19.659899 | orchestrator | TASK [Copy fact files] *********************************************************
2026-03-07 00:25:19.659910 | orchestrator | Saturday 07 March 2026 00:25:10 +0000 (0:00:00.476) 0:00:29.669 ********
2026-03-07 00:25:19.659921 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2026-03-07 00:25:19.659933 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2026-03-07 00:25:19.659943 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2026-03-07 00:25:19.659955 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2026-03-07 00:25:19.659965 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2026-03-07 00:25:19.659976 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2026-03-07 00:25:19.659987 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2026-03-07 00:25:19.659998 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2026-03-07 00:25:19.660009 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2026-03-07 00:25:19.660019 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2026-03-07 00:25:19.660030 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2026-03-07 00:25:19.660041 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2026-03-07 00:25:19.660052 | orchestrator | 
2026-03-07 00:25:19.660063 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-03-07 00:25:19.660074 | orchestrator | Saturday 07 March 2026 00:25:13 +0000 (0:00:03.401) 0:00:33.071 ********
2026-03-07 00:25:19.660085 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:25:19.660095 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:25:19.660106 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:25:19.660117 | orchestrator | 
2026-03-07 00:25:19.660128 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-03-07 00:25:19.660139 | orchestrator | 
2026-03-07 00:25:19.660150 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-07 00:25:19.660161 | orchestrator | Saturday 07 March 2026 00:25:15 +0000 (0:00:01.297) 0:00:34.368 ********
2026-03-07 00:25:19.660178 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:25:19.660189 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:25:19.660200 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:25:19.660210 | orchestrator | ok: [testbed-manager]
2026-03-07 00:25:19.660221 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:25:19.660271 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:25:19.660283 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:25:19.660294 | orchestrator | 
2026-03-07 00:25:19.660305 | orchestrator | PLAY RECAP *********************************************************************
2026-03-07 00:25:19.660317 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-07 00:25:19.660328 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-07 00:25:19.660341 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-07 00:25:19.660352 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-07 00:25:19.660363 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-07 00:25:19.660374 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-07 00:25:19.660385 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-07 00:25:19.660416 | orchestrator | 
2026-03-07 00:25:19.660428 | orchestrator | 
2026-03-07 00:25:19.660439 | orchestrator | TASKS RECAP ********************************************************************
2026-03-07 00:25:19.660450 | orchestrator | Saturday 07 March 2026 00:25:19 +0000 (0:00:04.556) 0:00:38.925 ********
2026-03-07 00:25:19.660461 | orchestrator | ===============================================================================
2026-03-07 00:25:19.660472 | orchestrator | osism.commons.repository : Update package cache ------------------------ 15.05s
2026-03-07 00:25:19.660483 | orchestrator | Install required packages (Debian) -------------------------------------- 7.50s
2026-03-07 00:25:19.660493 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.56s
2026-03-07 00:25:19.660504 | orchestrator | Copy fact files --------------------------------------------------------- 3.40s
2026-03-07 00:25:19.660515 | orchestrator | Create custom facts directory ------------------------------------------- 1.41s
2026-03-07 00:25:19.660526 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.30s
2026-03-07 00:25:19.660543 | orchestrator | Copy fact file ---------------------------------------------------------- 1.21s
2026-03-07 00:25:19.911979 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.03s
2026-03-07 00:25:19.912119 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.01s
2026-03-07 00:25:19.912145 | orchestrator | Create custom facts directory ------------------------------------------- 0.48s
2026-03-07 00:25:19.912164 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.46s
2026-03-07 00:25:19.912182 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.44s
2026-03-07 00:25:19.912200 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.25s
2026-03-07 00:25:19.912219 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.24s
2026-03-07 00:25:19.912236 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.16s
2026-03-07 00:25:19.912254 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.13s
2026-03-07 00:25:19.912296 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.12s
2026-03-07 00:25:19.912345 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.11s
2026-03-07 00:25:20.272626 | orchestrator | + osism apply bootstrap
2026-03-07 00:25:32.376049 | orchestrator | 2026-03-07 00:25:32 | INFO  | Prepare task for execution of bootstrap.
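The repository role in the play above removed the legacy sources.list and installed a deb822-style ubuntu.sources file, the default APT sources format on Ubuntu 24.04. For orientation, a minimal example of that format (the mirror URL, suites, and keyring path below are illustrative assumptions, not values taken from this log):

```
Types: deb
URIs: http://archive.ubuntu.com/ubuntu/
Suites: noble noble-updates noble-security
Components: main restricted universe multiverse
Signed-By: /usr/share/keyrings/ubuntu-archive-keyring.gpg
```

This explains the preceding "Include tasks for Ubuntu < 24.04" skips: on 24.04 the role writes the deb822 file instead of a one-line-per-entry sources.list.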
2026-03-07 00:25:32.464150 | orchestrator | 2026-03-07 00:25:32 | INFO  | Task 1b44098c-79fc-49c9-83a4-166b01b011d7 (bootstrap) was prepared for execution.
2026-03-07 00:25:32.464268 | orchestrator | 2026-03-07 00:25:32 | INFO  | It takes a moment until task 1b44098c-79fc-49c9-83a4-166b01b011d7 (bootstrap) has been started and output is visible here.
2026-03-07 00:25:48.903856 | orchestrator | 
2026-03-07 00:25:48.904003 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2026-03-07 00:25:48.904032 | orchestrator | 
2026-03-07 00:25:48.904053 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2026-03-07 00:25:48.904072 | orchestrator | Saturday 07 March 2026 00:25:36 +0000 (0:00:00.148) 0:00:00.148 ********
2026-03-07 00:25:48.904093 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:25:48.904115 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:25:48.904134 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:25:48.904152 | orchestrator | ok: [testbed-manager]
2026-03-07 00:25:48.904172 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:25:48.904192 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:25:48.904212 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:25:48.904233 | orchestrator | 
2026-03-07 00:25:48.904252 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-03-07 00:25:48.904273 | orchestrator | 
2026-03-07 00:25:48.904293 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-07 00:25:48.904313 | orchestrator | Saturday 07 March 2026 00:25:37 +0000 (0:00:00.252) 0:00:00.401 ********
2026-03-07 00:25:48.904334 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:25:48.904442 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:25:48.904468 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:25:48.904489 | orchestrator | ok: [testbed-manager]
2026-03-07 00:25:48.904511 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:25:48.904532 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:25:48.904554 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:25:48.904577 | orchestrator | 
2026-03-07 00:25:48.904599 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2026-03-07 00:25:48.904622 | orchestrator | 
2026-03-07 00:25:48.904643 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-07 00:25:48.904663 | orchestrator | Saturday 07 March 2026 00:25:40 +0000 (0:00:03.590) 0:00:03.991 ********
2026-03-07 00:25:48.904683 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3) 
2026-03-07 00:25:48.904703 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3) 
2026-03-07 00:25:48.904722 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4) 
2026-03-07 00:25:48.904740 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4) 
2026-03-07 00:25:48.904757 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5) 
2026-03-07 00:25:48.904775 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5) 
2026-03-07 00:25:48.904793 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3) 
2026-03-07 00:25:48.904812 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager) 
2026-03-07 00:25:48.904830 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager) 
2026-03-07 00:25:48.904848 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0) 
2026-03-07 00:25:48.904864 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0) 
2026-03-07 00:25:48.904883 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4) 
2026-03-07 00:25:48.904900 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1) 
2026-03-07 00:25:48.904918 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1) 
2026-03-07 00:25:48.904937 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2) 
2026-03-07 00:25:48.904993 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5) 
2026-03-07 00:25:48.905018 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2) 
2026-03-07 00:25:48.905037 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3) 
2026-03-07 00:25:48.905054 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:25:48.905073 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3) 
2026-03-07 00:25:48.905092 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:25:48.905111 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager) 
2026-03-07 00:25:48.905128 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4) 
2026-03-07 00:25:48.905146 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4) 
2026-03-07 00:25:48.905163 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0) 
2026-03-07 00:25:48.905181 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5) 
2026-03-07 00:25:48.905201 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5) 
2026-03-07 00:25:48.905219 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3) 
2026-03-07 00:25:48.905238 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1) 
2026-03-07 00:25:48.905251 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2) 
2026-03-07 00:25:48.905261 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager) 
2026-03-07 00:25:48.905272 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:25:48.905283 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4) 
2026-03-07 00:25:48.905293 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3) 
2026-03-07 00:25:48.905304 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager) 
2026-03-07 00:25:48.905315 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0) 
2026-03-07 00:25:48.905326 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0) 
2026-03-07 00:25:48.905337 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4) 
2026-03-07 00:25:48.905380 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1) 
2026-03-07 00:25:48.905393 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5) 
2026-03-07 00:25:48.905404 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1) 
2026-03-07 00:25:48.905415 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2) 
2026-03-07 00:25:48.905426 | orchestrator | skipping: [testbed-manager]
2026-03-07 00:25:48.905436 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2) 
2026-03-07 00:25:48.905447 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:25:48.905458 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5) 
2026-03-07 00:25:48.905469 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager) 
2026-03-07 00:25:48.905503 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0) 
2026-03-07 00:25:48.905515 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager) 
2026-03-07 00:25:48.905526 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1) 
2026-03-07 00:25:48.905536 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0) 
2026-03-07 00:25:48.905547 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2) 
2026-03-07 00:25:48.905558 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:25:48.905569 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1) 
2026-03-07 00:25:48.905580 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2) 
2026-03-07 00:25:48.905590 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:25:48.905601 | orchestrator | 
2026-03-07 00:25:48.905612 | orchestrator | PLAY [Apply bootstrap roles part 1] ********************************************
2026-03-07 00:25:48.905623 | orchestrator | 
2026-03-07 00:25:48.905634 | orchestrator | TASK [osism.commons.hostname : Set hostname] ***********************************
2026-03-07 00:25:48.905645 | orchestrator | Saturday 07 March 2026 00:25:41 +0000 (0:00:00.574) 0:00:04.566 ********
2026-03-07 00:25:48.905656 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:25:48.905680 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:25:48.905691 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:25:48.905702 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:25:48.905713 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:25:48.905723 | orchestrator | ok: [testbed-manager]
2026-03-07 00:25:48.905734 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:25:48.905745 | orchestrator | 
2026-03-07 00:25:48.905756 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] *****************************
2026-03-07 00:25:48.905766 | orchestrator | Saturday 07 March 2026 00:25:42 +0000 (0:00:01.239) 0:00:05.805 ********
2026-03-07 00:25:48.905777 | orchestrator | ok: [testbed-manager]
2026-03-07 00:25:48.905789 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:25:48.905800 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:25:48.905810 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:25:48.905821 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:25:48.905832 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:25:48.905842 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:25:48.905853 | orchestrator | 
2026-03-07 00:25:48.905864 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] ***********************
2026-03-07 00:25:48.905875 | orchestrator | Saturday 07 March 2026 00:25:43 +0000 (0:00:01.385) 0:00:07.190 ********
2026-03-07 00:25:48.905887 | orchestrator | included:
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 00:25:48.905900 | orchestrator | 2026-03-07 00:25:48.905911 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ****************************** 2026-03-07 00:25:48.905922 | orchestrator | Saturday 07 March 2026 00:25:44 +0000 (0:00:00.318) 0:00:07.509 ******** 2026-03-07 00:25:48.905933 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:25:48.905944 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:25:48.905955 | orchestrator | changed: [testbed-manager] 2026-03-07 00:25:48.905966 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:25:48.905977 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:25:48.905988 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:25:48.905998 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:25:48.906009 | orchestrator | 2026-03-07 00:25:48.906093 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2026-03-07 00:25:48.906105 | orchestrator | Saturday 07 March 2026 00:25:46 +0000 (0:00:02.065) 0:00:09.575 ******** 2026-03-07 00:25:48.906116 | orchestrator | skipping: [testbed-manager] 2026-03-07 00:25:48.906129 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 00:25:48.906142 | orchestrator | 2026-03-07 00:25:48.906153 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2026-03-07 00:25:48.906164 | orchestrator | Saturday 07 March 2026 00:25:46 +0000 (0:00:00.296) 0:00:09.872 ******** 2026-03-07 00:25:48.906175 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:25:48.906186 | 
orchestrator | changed: [testbed-node-5] 2026-03-07 00:25:48.906197 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:25:48.906208 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:25:48.906218 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:25:48.906249 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:25:48.906260 | orchestrator | 2026-03-07 00:25:48.906271 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ****** 2026-03-07 00:25:48.906282 | orchestrator | Saturday 07 March 2026 00:25:47 +0000 (0:00:01.029) 0:00:10.902 ******** 2026-03-07 00:25:48.906293 | orchestrator | skipping: [testbed-manager] 2026-03-07 00:25:48.906304 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:25:48.906314 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:25:48.906325 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:25:48.906336 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:25:48.906405 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:25:48.906428 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:25:48.906438 | orchestrator | 2026-03-07 00:25:48.906449 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2026-03-07 00:25:48.906465 | orchestrator | Saturday 07 March 2026 00:25:48 +0000 (0:00:00.559) 0:00:11.462 ******** 2026-03-07 00:25:48.906476 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:25:48.906487 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:25:48.906497 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:25:48.906508 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:25:48.906518 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:25:48.906529 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:25:48.906540 | orchestrator | ok: [testbed-manager] 2026-03-07 00:25:48.906551 | orchestrator | 2026-03-07 00:25:48.906562 | orchestrator | TASK [osism.commons.resolvconf : 
Check minimum and maximum number of name servers] *** 2026-03-07 00:25:48.906573 | orchestrator | Saturday 07 March 2026 00:25:48 +0000 (0:00:00.541) 0:00:12.003 ******** 2026-03-07 00:25:48.906584 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:25:48.906595 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:25:48.906616 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:26:01.064460 | orchestrator | skipping: [testbed-manager] 2026-03-07 00:26:01.064578 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:26:01.064592 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:26:01.064603 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:26:01.064613 | orchestrator | 2026-03-07 00:26:01.064625 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-03-07 00:26:01.064636 | orchestrator | Saturday 07 March 2026 00:25:49 +0000 (0:00:00.256) 0:00:12.260 ******** 2026-03-07 00:26:01.064648 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 00:26:01.064675 | orchestrator | 2026-03-07 00:26:01.064685 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-03-07 00:26:01.064696 | orchestrator | Saturday 07 March 2026 00:25:49 +0000 (0:00:00.344) 0:00:12.604 ******** 2026-03-07 00:26:01.064706 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 00:26:01.064716 | orchestrator | 2026-03-07 00:26:01.064726 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2026-03-07 
00:26:01.064736 | orchestrator | Saturday 07 March 2026 00:25:49 +0000 (0:00:00.446) 0:00:13.050 ******** 2026-03-07 00:26:01.064746 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:26:01.064756 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:26:01.064766 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:26:01.064776 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:26:01.064785 | orchestrator | ok: [testbed-manager] 2026-03-07 00:26:01.064795 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:26:01.064805 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:26:01.064814 | orchestrator | 2026-03-07 00:26:01.064824 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-03-07 00:26:01.064835 | orchestrator | Saturday 07 March 2026 00:25:51 +0000 (0:00:01.276) 0:00:14.327 ******** 2026-03-07 00:26:01.064845 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:26:01.064855 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:26:01.064864 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:26:01.064874 | orchestrator | skipping: [testbed-manager] 2026-03-07 00:26:01.064884 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:26:01.064894 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:26:01.064903 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:26:01.064913 | orchestrator | 2026-03-07 00:26:01.064923 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-03-07 00:26:01.064954 | orchestrator | Saturday 07 March 2026 00:25:51 +0000 (0:00:00.240) 0:00:14.568 ******** 2026-03-07 00:26:01.064965 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:26:01.064974 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:26:01.064984 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:26:01.064993 | orchestrator | ok: [testbed-manager] 2026-03-07 00:26:01.065003 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:26:01.065012 | orchestrator 
| ok: [testbed-node-1] 2026-03-07 00:26:01.065022 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:26:01.065031 | orchestrator | 2026-03-07 00:26:01.065041 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-03-07 00:26:01.065051 | orchestrator | Saturday 07 March 2026 00:25:51 +0000 (0:00:00.574) 0:00:15.143 ******** 2026-03-07 00:26:01.065061 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:26:01.065070 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:26:01.065080 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:26:01.065090 | orchestrator | skipping: [testbed-manager] 2026-03-07 00:26:01.065099 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:26:01.065109 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:26:01.065118 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:26:01.065128 | orchestrator | 2026-03-07 00:26:01.065138 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-03-07 00:26:01.065149 | orchestrator | Saturday 07 March 2026 00:25:52 +0000 (0:00:00.234) 0:00:15.377 ******** 2026-03-07 00:26:01.065159 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:26:01.065168 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:26:01.065178 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:26:01.065187 | orchestrator | ok: [testbed-manager] 2026-03-07 00:26:01.065197 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:26:01.065207 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:26:01.065216 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:26:01.065226 | orchestrator | 2026-03-07 00:26:01.065236 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-03-07 00:26:01.065245 | orchestrator | Saturday 07 March 2026 00:25:52 +0000 (0:00:00.551) 0:00:15.929 ******** 2026-03-07 00:26:01.065255 | orchestrator | ok: 
[testbed-manager] 2026-03-07 00:26:01.065265 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:26:01.065275 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:26:01.065284 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:26:01.065294 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:26:01.065303 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:26:01.065313 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:26:01.065322 | orchestrator | 2026-03-07 00:26:01.065368 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-03-07 00:26:01.065379 | orchestrator | Saturday 07 March 2026 00:25:53 +0000 (0:00:01.093) 0:00:17.023 ******** 2026-03-07 00:26:01.065389 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:26:01.065399 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:26:01.065409 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:26:01.065418 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:26:01.065428 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:26:01.065437 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:26:01.065447 | orchestrator | ok: [testbed-manager] 2026-03-07 00:26:01.065457 | orchestrator | 2026-03-07 00:26:01.065467 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-03-07 00:26:01.065477 | orchestrator | Saturday 07 March 2026 00:25:54 +0000 (0:00:01.031) 0:00:18.055 ******** 2026-03-07 00:26:01.065504 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 00:26:01.065515 | orchestrator | 2026-03-07 00:26:01.065525 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-03-07 00:26:01.065542 | orchestrator | Saturday 07 March 2026 
00:25:55 +0000 (0:00:00.353) 0:00:18.409 ******** 2026-03-07 00:26:01.065552 | orchestrator | skipping: [testbed-manager] 2026-03-07 00:26:01.065562 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:26:01.065572 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:26:01.065582 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:26:01.065591 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:26:01.065601 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:26:01.065610 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:26:01.065620 | orchestrator | 2026-03-07 00:26:01.065630 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-03-07 00:26:01.065640 | orchestrator | Saturday 07 March 2026 00:25:56 +0000 (0:00:01.253) 0:00:19.663 ******** 2026-03-07 00:26:01.065649 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:26:01.065659 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:26:01.065669 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:26:01.065679 | orchestrator | ok: [testbed-manager] 2026-03-07 00:26:01.065688 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:26:01.065698 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:26:01.065707 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:26:01.065717 | orchestrator | 2026-03-07 00:26:01.065727 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-03-07 00:26:01.065736 | orchestrator | Saturday 07 March 2026 00:25:56 +0000 (0:00:00.237) 0:00:19.900 ******** 2026-03-07 00:26:01.065746 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:26:01.065756 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:26:01.065765 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:26:01.065775 | orchestrator | ok: [testbed-manager] 2026-03-07 00:26:01.065785 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:26:01.065794 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:26:01.065804 | 
orchestrator | ok: [testbed-node-2] 2026-03-07 00:26:01.065813 | orchestrator | 2026-03-07 00:26:01.065823 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-03-07 00:26:01.065833 | orchestrator | Saturday 07 March 2026 00:25:56 +0000 (0:00:00.259) 0:00:20.160 ******** 2026-03-07 00:26:01.065842 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:26:01.065852 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:26:01.065862 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:26:01.065871 | orchestrator | ok: [testbed-manager] 2026-03-07 00:26:01.065881 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:26:01.065890 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:26:01.065900 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:26:01.065910 | orchestrator | 2026-03-07 00:26:01.065919 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-03-07 00:26:01.065929 | orchestrator | Saturday 07 March 2026 00:25:57 +0000 (0:00:00.282) 0:00:20.442 ******** 2026-03-07 00:26:01.065940 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 00:26:01.065952 | orchestrator | 2026-03-07 00:26:01.065962 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-03-07 00:26:01.065972 | orchestrator | Saturday 07 March 2026 00:25:57 +0000 (0:00:00.353) 0:00:20.796 ******** 2026-03-07 00:26:01.065981 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:26:01.065991 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:26:01.066001 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:26:01.066010 | orchestrator | ok: [testbed-manager] 2026-03-07 00:26:01.066073 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:26:01.066112 | orchestrator | ok: 
[testbed-node-1] 2026-03-07 00:26:01.066123 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:26:01.066133 | orchestrator | 2026-03-07 00:26:01.066143 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-03-07 00:26:01.066153 | orchestrator | Saturday 07 March 2026 00:25:58 +0000 (0:00:00.593) 0:00:21.390 ******** 2026-03-07 00:26:01.066163 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:26:01.066179 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:26:01.066189 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:26:01.066199 | orchestrator | skipping: [testbed-manager] 2026-03-07 00:26:01.066209 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:26:01.066218 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:26:01.066228 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:26:01.066238 | orchestrator | 2026-03-07 00:26:01.066248 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-03-07 00:26:01.066257 | orchestrator | Saturday 07 March 2026 00:25:58 +0000 (0:00:00.236) 0:00:21.627 ******** 2026-03-07 00:26:01.066267 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:26:01.066277 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:26:01.066286 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:26:01.066296 | orchestrator | ok: [testbed-manager] 2026-03-07 00:26:01.066306 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:26:01.066316 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:26:01.066346 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:26:01.066364 | orchestrator | 2026-03-07 00:26:01.066380 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-03-07 00:26:01.066398 | orchestrator | Saturday 07 March 2026 00:25:59 +0000 (0:00:01.078) 0:00:22.706 ******** 2026-03-07 00:26:01.066415 | orchestrator | ok: [testbed-node-3] 2026-03-07 
00:26:01.066431 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:26:01.066455 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:26:01.066472 | orchestrator | ok: [testbed-manager] 2026-03-07 00:26:01.066486 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:26:01.066501 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:26:01.066515 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:26:01.066530 | orchestrator | 2026-03-07 00:26:01.066545 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-03-07 00:26:01.066560 | orchestrator | Saturday 07 March 2026 00:26:00 +0000 (0:00:00.578) 0:00:23.285 ******** 2026-03-07 00:26:01.066577 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:26:01.066592 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:26:01.066609 | orchestrator | ok: [testbed-manager] 2026-03-07 00:26:01.066623 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:26:01.066644 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:26:41.415016 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:26:41.415163 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:26:41.415181 | orchestrator | 2026-03-07 00:26:41.415194 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-03-07 00:26:41.415207 | orchestrator | Saturday 07 March 2026 00:26:01 +0000 (0:00:01.154) 0:00:24.439 ******** 2026-03-07 00:26:41.415219 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:26:41.415231 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:26:41.415242 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:26:41.415252 | orchestrator | changed: [testbed-manager] 2026-03-07 00:26:41.415337 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:26:41.415350 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:26:41.415361 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:26:41.415372 | orchestrator | 2026-03-07 00:26:41.415383 | orchestrator | TASK 
[osism.services.rsyslog : Gather variables for each operating system] ***** 2026-03-07 00:26:41.415395 | orchestrator | Saturday 07 March 2026 00:26:15 +0000 (0:00:14.743) 0:00:39.183 ******** 2026-03-07 00:26:41.415410 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:26:41.415436 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:26:41.415462 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:26:41.415481 | orchestrator | ok: [testbed-manager] 2026-03-07 00:26:41.415501 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:26:41.415524 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:26:41.415544 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:26:41.415565 | orchestrator | 2026-03-07 00:26:41.415583 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] ***** 2026-03-07 00:26:41.415600 | orchestrator | Saturday 07 March 2026 00:26:16 +0000 (0:00:00.235) 0:00:39.419 ******** 2026-03-07 00:26:41.415661 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:26:41.415686 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:26:41.415705 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:26:41.415723 | orchestrator | ok: [testbed-manager] 2026-03-07 00:26:41.415740 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:26:41.415758 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:26:41.415777 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:26:41.415795 | orchestrator | 2026-03-07 00:26:41.415812 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] *** 2026-03-07 00:26:41.415830 | orchestrator | Saturday 07 March 2026 00:26:16 +0000 (0:00:00.267) 0:00:39.687 ******** 2026-03-07 00:26:41.415848 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:26:41.415866 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:26:41.415885 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:26:41.415904 | orchestrator | ok: [testbed-manager] 2026-03-07 00:26:41.415923 | orchestrator | ok: 
[testbed-node-0] 2026-03-07 00:26:41.415941 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:26:41.415960 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:26:41.415978 | orchestrator | 2026-03-07 00:26:41.415998 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] **** 2026-03-07 00:26:41.416017 | orchestrator | Saturday 07 March 2026 00:26:16 +0000 (0:00:00.253) 0:00:39.941 ******** 2026-03-07 00:26:41.416039 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 00:26:41.416064 | orchestrator | 2026-03-07 00:26:41.416085 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2026-03-07 00:26:41.416106 | orchestrator | Saturday 07 March 2026 00:26:16 +0000 (0:00:00.295) 0:00:40.236 ******** 2026-03-07 00:26:41.416126 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:26:41.416147 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:26:41.416167 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:26:41.416187 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:26:41.416233 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:26:41.416256 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:26:41.416305 | orchestrator | ok: [testbed-manager] 2026-03-07 00:26:41.416324 | orchestrator | 2026-03-07 00:26:41.416344 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2026-03-07 00:26:41.416363 | orchestrator | Saturday 07 March 2026 00:26:18 +0000 (0:00:01.621) 0:00:41.858 ******** 2026-03-07 00:26:41.416380 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:26:41.416398 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:26:41.416416 | orchestrator | changed: [testbed-manager] 2026-03-07 00:26:41.416435 | orchestrator | 
changed: [testbed-node-0] 2026-03-07 00:26:41.416453 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:26:41.416471 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:26:41.416488 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:26:41.416506 | orchestrator | 2026-03-07 00:26:41.416525 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] ************************* 2026-03-07 00:26:41.416542 | orchestrator | Saturday 07 March 2026 00:26:19 +0000 (0:00:01.066) 0:00:42.925 ******** 2026-03-07 00:26:41.416558 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:26:41.416574 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:26:41.416590 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:26:41.416606 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:26:41.416622 | orchestrator | ok: [testbed-manager] 2026-03-07 00:26:41.416639 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:26:41.416655 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:26:41.416673 | orchestrator | 2026-03-07 00:26:41.416693 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] ************************** 2026-03-07 00:26:41.416711 | orchestrator | Saturday 07 March 2026 00:26:20 +0000 (0:00:00.797) 0:00:43.723 ******** 2026-03-07 00:26:41.416742 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 00:26:41.416780 | orchestrator | 2026-03-07 00:26:41.416797 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] *** 2026-03-07 00:26:41.416817 | orchestrator | Saturday 07 March 2026 00:26:20 +0000 (0:00:00.302) 0:00:44.025 ******** 2026-03-07 00:26:41.416835 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:26:41.416854 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:26:41.416872 | 
orchestrator | changed: [testbed-node-3]
2026-03-07 00:26:41.416891 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:26:41.416910 | orchestrator | changed: [testbed-manager]
2026-03-07 00:26:41.416927 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:26:41.416946 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:26:41.416964 | orchestrator |
2026-03-07 00:26:41.417046 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************
2026-03-07 00:26:41.417066 | orchestrator | Saturday 07 March 2026 00:26:21 +0000 (0:00:01.057) 0:00:45.082 ********
2026-03-07 00:26:41.417084 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:26:41.417102 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:26:41.417120 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:26:41.417137 | orchestrator | skipping: [testbed-manager]
2026-03-07 00:26:41.417155 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:26:41.417173 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:26:41.417192 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:26:41.417209 | orchestrator |
2026-03-07 00:26:41.417227 | orchestrator | TASK [osism.services.rsyslog : Include logrotate tasks] ************************
2026-03-07 00:26:41.417246 | orchestrator | Saturday 07 March 2026 00:26:22 +0000 (0:00:00.257) 0:00:45.340 ********
2026-03-07 00:26:41.417290 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/logrotate.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-07 00:26:41.417312 | orchestrator |
2026-03-07 00:26:41.417333 | orchestrator | TASK [osism.services.rsyslog : Ensure logrotate package is installed] **********
2026-03-07 00:26:41.417353 | orchestrator | Saturday 07 March 2026 00:26:22 +0000 (0:00:00.376) 0:00:45.717 ********
2026-03-07 00:26:41.417372 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:26:41.417393 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:26:41.417411 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:26:41.417430 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:26:41.417448 | orchestrator | ok: [testbed-manager]
2026-03-07 00:26:41.417466 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:26:41.417486 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:26:41.417505 | orchestrator |
2026-03-07 00:26:41.417523 | orchestrator | TASK [osism.services.rsyslog : Configure logrotate for rsyslog] ****************
2026-03-07 00:26:41.417542 | orchestrator | Saturday 07 March 2026 00:26:24 +0000 (0:00:01.576) 0:00:47.293 ********
2026-03-07 00:26:41.417559 | orchestrator | changed: [testbed-node-4]
2026-03-07 00:26:41.417578 | orchestrator | changed: [testbed-manager]
2026-03-07 00:26:41.417597 | orchestrator | changed: [testbed-node-3]
2026-03-07 00:26:41.417615 | orchestrator | changed: [testbed-node-5]
2026-03-07 00:26:41.417633 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:26:41.417651 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:26:41.417669 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:26:41.417687 | orchestrator |
2026-03-07 00:26:41.417705 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] ****************
2026-03-07 00:26:41.417724 | orchestrator | Saturday 07 March 2026 00:26:25 +0000 (0:00:01.187) 0:00:48.481 ********
2026-03-07 00:26:41.417742 | orchestrator | changed: [testbed-node-4]
2026-03-07 00:26:41.417761 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:26:41.417780 | orchestrator | changed: [testbed-node-5]
2026-03-07 00:26:41.417797 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:26:41.417816 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:26:41.417834 | orchestrator | changed: [testbed-node-3]
2026-03-07 00:26:41.417871 | orchestrator | changed: [testbed-manager]
2026-03-07 00:26:41.417889 | orchestrator |
2026-03-07 00:26:41.417907 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] *****************************
2026-03-07 00:26:41.417927 | orchestrator | Saturday 07 March 2026 00:26:38 +0000 (0:00:12.967) 0:01:01.448 ********
2026-03-07 00:26:41.417945 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:26:41.417963 | orchestrator | ok: [testbed-manager]
2026-03-07 00:26:41.417981 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:26:41.417999 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:26:41.418104 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:26:41.418128 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:26:41.418146 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:26:41.418163 | orchestrator |
2026-03-07 00:26:41.418181 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ******************
2026-03-07 00:26:41.418200 | orchestrator | Saturday 07 March 2026 00:26:39 +0000 (0:00:01.461) 0:01:02.910 ********
2026-03-07 00:26:41.418217 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:26:41.418235 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:26:41.418253 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:26:41.418309 | orchestrator | ok: [testbed-manager]
2026-03-07 00:26:41.418328 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:26:41.418346 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:26:41.418364 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:26:41.418382 | orchestrator |
2026-03-07 00:26:41.418401 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] *****
2026-03-07 00:26:41.418419 | orchestrator | Saturday 07 March 2026 00:26:40 +0000 (0:00:00.894) 0:01:03.804 ********
2026-03-07 00:26:41.418437 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:26:41.418455 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:26:41.418473 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:26:41.418491 | orchestrator | ok: [testbed-manager]
2026-03-07 00:26:41.418510 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:26:41.418527 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:26:41.418546 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:26:41.418564 | orchestrator |
2026-03-07 00:26:41.418583 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] ***
2026-03-07 00:26:41.418601 | orchestrator | Saturday 07 March 2026 00:26:40 +0000 (0:00:00.267) 0:01:04.071 ********
2026-03-07 00:26:41.418620 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:26:41.418638 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:26:41.418656 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:26:41.418684 | orchestrator | ok: [testbed-manager]
2026-03-07 00:26:41.418703 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:26:41.418720 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:26:41.418738 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:26:41.418757 | orchestrator |
2026-03-07 00:26:41.418775 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] ****
2026-03-07 00:26:41.418793 | orchestrator | Saturday 07 March 2026 00:26:41 +0000 (0:00:00.250) 0:01:04.321 ********
2026-03-07 00:26:41.418814 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-07 00:26:41.418834 | orchestrator |
2026-03-07 00:26:41.418869 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ********************
2026-03-07 00:29:09.060559 | orchestrator | Saturday 07 March 2026 00:26:41 +0000 (0:00:00.328) 0:01:04.650 ********
2026-03-07 00:29:09.060689 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:29:09.060707 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:29:09.060719 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:29:09.060730 | orchestrator | ok: [testbed-manager]
2026-03-07 00:29:09.060741 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:29:09.060752 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:29:09.060762 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:29:09.060773 | orchestrator |
2026-03-07 00:29:09.060785 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] ***************************
2026-03-07 00:29:09.060820 | orchestrator | Saturday 07 March 2026 00:26:42 +0000 (0:00:01.588) 0:01:06.239 ********
2026-03-07 00:29:09.060832 | orchestrator | changed: [testbed-node-4]
2026-03-07 00:29:09.060844 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:29:09.060855 | orchestrator | changed: [testbed-node-3]
2026-03-07 00:29:09.060866 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:29:09.060877 | orchestrator | changed: [testbed-node-5]
2026-03-07 00:29:09.060888 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:29:09.060899 | orchestrator | changed: [testbed-manager]
2026-03-07 00:29:09.060909 | orchestrator |
2026-03-07 00:29:09.060921 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] ***
2026-03-07 00:29:09.060932 | orchestrator | Saturday 07 March 2026 00:26:43 +0000 (0:00:00.519) 0:01:06.758 ********
2026-03-07 00:29:09.060943 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:29:09.060954 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:29:09.060965 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:29:09.060975 | orchestrator | ok: [testbed-manager]
2026-03-07 00:29:09.060986 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:29:09.060996 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:29:09.061007 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:29:09.061018 | orchestrator |
2026-03-07 00:29:09.061029 | orchestrator | TASK [osism.commons.packages : Update package cache] ***************************
2026-03-07 00:29:09.061096 | orchestrator | Saturday 07 March 2026 00:26:43 +0000 (0:00:00.267) 0:01:07.026 ********
2026-03-07 00:29:09.061110 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:29:09.061123 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:29:09.061136 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:29:09.061149 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:29:09.061162 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:29:09.061175 | orchestrator | ok: [testbed-manager]
2026-03-07 00:29:09.061188 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:29:09.061200 | orchestrator |
2026-03-07 00:29:09.061213 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] **********************
2026-03-07 00:29:09.061227 | orchestrator | Saturday 07 March 2026 00:26:45 +0000 (0:00:01.216) 0:01:08.243 ********
2026-03-07 00:29:09.061240 | orchestrator | changed: [testbed-node-4]
2026-03-07 00:29:09.061254 | orchestrator | changed: [testbed-node-5]
2026-03-07 00:29:09.061266 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:29:09.061279 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:29:09.061292 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:29:09.061306 | orchestrator | changed: [testbed-node-3]
2026-03-07 00:29:09.061332 | orchestrator | changed: [testbed-manager]
2026-03-07 00:29:09.061357 | orchestrator |
2026-03-07 00:29:09.061370 | orchestrator | TASK [osism.commons.packages : Upgrade packages] *******************************
2026-03-07 00:29:09.061383 | orchestrator | Saturday 07 March 2026 00:26:46 +0000 (0:00:01.578) 0:01:09.821 ********
2026-03-07 00:29:09.061396 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:29:09.061410 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:29:09.061423 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:29:09.061435 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:29:09.061445 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:29:09.061456 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:29:09.061467 | orchestrator | ok: [testbed-manager]
2026-03-07 00:29:09.061477 | orchestrator |
2026-03-07 00:29:09.061488 | orchestrator | TASK [osism.commons.packages : Download required packages] *********************
2026-03-07 00:29:09.061499 | orchestrator | Saturday 07 March 2026 00:26:48 +0000 (0:00:02.298) 0:01:12.119 ********
2026-03-07 00:29:09.061510 | orchestrator | ok: [testbed-manager]
2026-03-07 00:29:09.061521 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:29:09.061531 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:29:09.061542 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:29:09.061552 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:29:09.061563 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:29:09.061574 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:29:09.061594 | orchestrator |
2026-03-07 00:29:09.061606 | orchestrator | TASK [osism.commons.packages : Install required packages] **********************
2026-03-07 00:29:09.061626 | orchestrator | Saturday 07 March 2026 00:27:28 +0000 (0:00:39.979) 0:01:52.099 ********
2026-03-07 00:29:09.061645 | orchestrator | changed: [testbed-manager]
2026-03-07 00:29:09.061662 | orchestrator | changed: [testbed-node-4]
2026-03-07 00:29:09.061681 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:29:09.061700 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:29:09.061719 | orchestrator | changed: [testbed-node-5]
2026-03-07 00:29:09.061739 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:29:09.061759 | orchestrator | changed: [testbed-node-3]
2026-03-07 00:29:09.061779 | orchestrator |
2026-03-07 00:29:09.061796 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] *********
2026-03-07 00:29:09.061807 | orchestrator | Saturday 07 March 2026 00:28:52 +0000 (0:01:23.926) 0:03:16.025 ********
2026-03-07 00:29:09.061818 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:29:09.061829 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:29:09.061839 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:29:09.061850 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:29:09.061861 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:29:09.061873 | orchestrator | ok: [testbed-manager]
2026-03-07 00:29:09.061883 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:29:09.061894 | orchestrator |
2026-03-07 00:29:09.061905 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] ***
2026-03-07 00:29:09.061916 | orchestrator | Saturday 07 March 2026 00:28:54 +0000 (0:00:01.732) 0:03:17.758 ********
2026-03-07 00:29:09.061927 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:29:09.061938 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:29:09.061948 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:29:09.061959 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:29:09.061969 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:29:09.061980 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:29:09.061991 | orchestrator | changed: [testbed-manager]
2026-03-07 00:29:09.062002 | orchestrator |
2026-03-07 00:29:09.062012 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] *****************************
2026-03-07 00:29:09.062118 | orchestrator | Saturday 07 March 2026 00:29:07 +0000 (0:00:13.328) 0:03:31.086 ********
2026-03-07 00:29:09.062167 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]})
2026-03-07 00:29:09.062193 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]})
2026-03-07 00:29:09.062209 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]})
2026-03-07 00:29:09.062222 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2026-03-07 00:29:09.062245 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'network', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2026-03-07 00:29:09.062261 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]})
2026-03-07 00:29:09.062272 | orchestrator |
2026-03-07 00:29:09.062284 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] ***********
2026-03-07 00:29:09.062295 | orchestrator | Saturday 07 March 2026 00:29:08 +0000 (0:00:00.411) 0:03:31.498 ********
2026-03-07 00:29:09.062306 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-03-07 00:29:09.062316 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:29:09.062328 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-03-07 00:29:09.062338 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-03-07 00:29:09.062349 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:29:09.062360 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:29:09.062371 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-03-07 00:29:09.062382 | orchestrator | skipping: [testbed-manager]
2026-03-07 00:29:09.062393 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-03-07 00:29:09.062414 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-03-07 00:29:09.062425 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-03-07 00:29:09.062462 | orchestrator |
2026-03-07 00:29:09.062473 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] ****************
2026-03-07 00:29:09.062490 | orchestrator | Saturday 07 March 2026 00:29:08 +0000 (0:00:00.712) 0:03:32.210 ********
2026-03-07 00:29:09.062501 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-03-07 00:29:09.062513 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-03-07 00:29:09.062524 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-03-07 00:29:09.062535 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-03-07 00:29:09.062546 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-03-07 00:29:09.062564 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-03-07 00:29:15.723433 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-03-07 00:29:15.723539 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-03-07 00:29:15.723551 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-03-07 00:29:15.723558 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-03-07 00:29:15.723564 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-03-07 00:29:15.723571 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-03-07 00:29:15.723578 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-03-07 00:29:15.723604 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-03-07 00:29:15.723611 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-03-07 00:29:15.723618 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-03-07 00:29:15.723626 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-03-07 00:29:15.723632 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-03-07 00:29:15.723639 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-03-07 00:29:15.723645 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-03-07 00:29:15.723652 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:29:15.723659 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-03-07 00:29:15.723665 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-03-07 00:29:15.723673 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-03-07 00:29:15.723679 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-03-07 00:29:15.723686 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-03-07 00:29:15.723692 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-03-07 00:29:15.723699 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-03-07 00:29:15.723705 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-03-07 00:29:15.723711 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-03-07 00:29:15.723717 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-03-07 00:29:15.723724 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-03-07 00:29:15.723730 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-03-07 00:29:15.723737 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-03-07 00:29:15.723742 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-03-07 00:29:15.723749 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-03-07 00:29:15.723756 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-03-07 00:29:15.723762 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-03-07 00:29:15.723769 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-03-07 00:29:15.723775 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-03-07 00:29:15.723782 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-03-07 00:29:15.723802 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:29:15.723808 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:29:15.723815 | orchestrator | skipping: [testbed-manager]
2026-03-07 00:29:15.723821 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-03-07 00:29:15.723827 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-03-07 00:29:15.723835 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-03-07 00:29:15.723848 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-03-07 00:29:15.723855 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-03-07 00:29:15.723878 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-03-07 00:29:15.723885 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-03-07 00:29:15.723891 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-03-07 00:29:15.723898 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-03-07 00:29:15.723904 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-03-07 00:29:15.723911 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-03-07 00:29:15.723918 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-03-07 00:29:15.723925 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-03-07 00:29:15.723932 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-03-07 00:29:15.723938 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-03-07 00:29:15.723945 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-03-07 00:29:15.723951 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-03-07 00:29:15.723958 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-03-07 00:29:15.723964 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-03-07 00:29:15.723970 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-03-07 00:29:15.723977 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-03-07 00:29:15.723983 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-03-07 00:29:15.723990 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-03-07 00:29:15.723996 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-03-07 00:29:15.724003 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-03-07 00:29:15.724009 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-03-07 00:29:15.724015 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-03-07 00:29:15.724022 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-03-07 00:29:15.724053 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-03-07 00:29:15.724060 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-03-07 00:29:15.724067 | orchestrator |
2026-03-07 00:29:15.724074 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] *****************
2026-03-07 00:29:15.724080 | orchestrator | Saturday 07 March 2026 00:29:14 +0000 (0:00:05.726) 0:03:37.937 ********
2026-03-07 00:29:15.724087 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-07 00:29:15.724094 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-07 00:29:15.724100 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-07 00:29:15.724107 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-07 00:29:15.724119 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-07 00:29:15.724125 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-07 00:29:15.724132 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-07 00:29:15.724138 | orchestrator |
2026-03-07 00:29:15.724145 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] *****************
2026-03-07 00:29:15.724151 | orchestrator | Saturday 07 March 2026 00:29:15 +0000 (0:00:00.581) 0:03:38.519 ********
2026-03-07 00:29:15.724158 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-07 00:29:15.724169 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-07 00:29:15.724176 | orchestrator | skipping: [testbed-manager]
2026-03-07 00:29:15.724183 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-07 00:29:15.724189 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:29:15.724195 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:29:15.724201 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-07 00:29:15.724207 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:29:15.724213 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-07 00:29:15.724219 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-07 00:29:15.724237 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-07 00:29:31.117625 | orchestrator |
2026-03-07 00:29:31.117726 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on network] *****************
2026-03-07 00:29:31.117737 | orchestrator | Saturday 07 March 2026 00:29:15 +0000 (0:00:00.469) 0:03:38.988 ********
2026-03-07 00:29:31.117743 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-07 00:29:31.117750 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-07 00:29:31.117756 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:29:31.117763 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-07 00:29:31.117775 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:29:31.117781 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:29:31.117786 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-07 00:29:31.117792 | orchestrator | skipping: [testbed-manager]
2026-03-07 00:29:31.117797 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-07 00:29:31.117803 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-07 00:29:31.117808 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-07 00:29:31.117814 | orchestrator |
2026-03-07 00:29:31.117819 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] ****************
2026-03-07 00:29:31.117825 | orchestrator | Saturday 07 March 2026 00:29:17 +0000 (0:00:01.555) 0:03:40.544 ********
2026-03-07 00:29:31.117831 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-07 00:29:31.117836 | orchestrator | skipping: [testbed-manager]
2026-03-07 00:29:31.117842 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-07 00:29:31.117847 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:29:31.117853 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-07 00:29:31.117876 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:29:31.117882 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-07 00:29:31.117887 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:29:31.117893 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-07 00:29:31.117899 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-07 00:29:31.117904 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-07 00:29:31.117910 | orchestrator |
2026-03-07 00:29:31.117915 | orchestrator | TASK [osism.commons.limits : Include limits tasks] *****************************
2026-03-07 00:29:31.117921 | orchestrator | Saturday 07 March 2026 00:29:18 +0000 (0:00:01.533) 0:03:42.078 ********
2026-03-07 00:29:31.117927 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:29:31.117933 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:29:31.117938 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:29:31.117944 | orchestrator | skipping: [testbed-manager]
2026-03-07 00:29:31.117949 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:29:31.117954 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:29:31.117960 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:29:31.117965 | orchestrator |
2026-03-07 00:29:31.117971 | orchestrator | TASK [osism.commons.services : Populate service facts] *************************
2026-03-07 00:29:31.117976 | orchestrator | Saturday 07 March 2026 00:29:19 +0000 (0:00:00.353) 0:03:42.431 ********
2026-03-07 00:29:31.117982 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:29:31.117988 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:29:31.117994 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:29:31.118078 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:29:31.118086 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:29:31.118092 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:29:31.118098 | orchestrator | ok: [testbed-manager]
2026-03-07 00:29:31.118103 | orchestrator |
2026-03-07 00:29:31.118109 | orchestrator | TASK [osism.commons.services : Check services] *********************************
2026-03-07 00:29:31.118114 | orchestrator | Saturday 07 March 2026 00:29:25 +0000 (0:00:06.036) 0:03:48.467 ********
2026-03-07 00:29:31.118120 | orchestrator | skipping: [testbed-node-3] => (item=nscd)
2026-03-07 00:29:31.118126 | orchestrator | skipping: [testbed-node-4] => (item=nscd)
2026-03-07 00:29:31.118132 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:29:31.118147 | orchestrator | skipping: [testbed-node-5] => (item=nscd)
2026-03-07 00:29:31.118152 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:29:31.118158 | orchestrator | skipping: [testbed-manager] => (item=nscd)
2026-03-07 00:29:31.118163 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:29:31.118177 | orchestrator | skipping: [testbed-manager]
2026-03-07 00:29:31.118183 | orchestrator | skipping: [testbed-node-0] => (item=nscd)
2026-03-07 00:29:31.118190 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:29:31.118196 | orchestrator | skipping: [testbed-node-1] => (item=nscd)
2026-03-07 00:29:31.118203 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:29:31.118210 | orchestrator | skipping: [testbed-node-2] => (item=nscd)
2026-03-07 00:29:31.118216 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:29:31.118222 | orchestrator |
2026-03-07 00:29:31.118228 | orchestrator | TASK [osism.commons.services : Start/enable required services] *****************
2026-03-07 00:29:31.118235 | orchestrator | Saturday 07 March 2026 00:29:25 +0000 (0:00:00.469) 0:03:48.937 ********
2026-03-07 00:29:31.118241 | orchestrator | ok: [testbed-node-3] => (item=cron)
2026-03-07 00:29:31.118247 | orchestrator | ok: [testbed-node-4] => (item=cron)
2026-03-07 00:29:31.118254 | orchestrator | ok: [testbed-node-5] => (item=cron)
2026-03-07 00:29:31.118273 | orchestrator | ok: [testbed-manager] => (item=cron)
2026-03-07 00:29:31.118280 | orchestrator | ok: [testbed-node-0] => (item=cron)
2026-03-07 00:29:31.118286 | orchestrator | ok: [testbed-node-1] => (item=cron)
2026-03-07 00:29:31.118298 | orchestrator | ok: [testbed-node-2] => (item=cron)
2026-03-07 00:29:31.118305 | orchestrator |
2026-03-07 00:29:31.118311 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ******
2026-03-07 00:29:31.118318 | orchestrator | Saturday 07 March 2026 00:29:26 +0000 (0:00:01.045) 0:03:49.983 ********
2026-03-07 00:29:31.118325 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-07 00:29:31.118333 | orchestrator |
2026-03-07 00:29:31.118340 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] *************************
2026-03-07 00:29:31.118347 | orchestrator | Saturday 07 March 2026 00:29:27 +0000 (0:00:00.499) 0:03:50.483 ********
2026-03-07 00:29:31.118353 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:29:31.118359 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:29:31.118366 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:29:31.118372 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:29:31.118378 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:29:31.118385 | orchestrator | ok: [testbed-manager]
2026-03-07 00:29:31.118391 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:29:31.118397 | orchestrator |
2026-03-07 00:29:31.118403 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] *************
2026-03-07 00:29:31.118410 | orchestrator | Saturday 07 March 2026 00:29:28 +0000 (0:00:01.400) 0:03:51.883 ********
2026-03-07 00:29:31.118416 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:29:31.118422 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:29:31.118428 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:29:31.118435 | orchestrator | ok: [testbed-manager]
2026-03-07 00:29:31.118441 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:29:31.118447 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:29:31.118453 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:29:31.118460 | orchestrator |
2026-03-07 00:29:31.118466 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] **************
2026-03-07 00:29:31.118473 | orchestrator | Saturday 07 March 2026 00:29:29 +0000 (0:00:00.599) 0:03:52.482 ********
2026-03-07 00:29:31.118480 | orchestrator | changed: [testbed-node-3]
2026-03-07 00:29:31.118501 | orchestrator | changed: [testbed-node-4]
2026-03-07 00:29:31.118507 | orchestrator | changed: [testbed-node-5]
2026-03-07 00:29:31.118514 | orchestrator | changed: [testbed-manager]
2026-03-07 00:29:31.118521 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:29:31.118527 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:29:31.118533 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:29:31.118540 | orchestrator |
2026-03-07 00:29:31.118547 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] **********
2026-03-07 00:29:31.118553 | orchestrator | Saturday 07 March 2026 00:29:29 +0000 (0:00:00.678)
0:03:53.161 ******** 2026-03-07 00:29:31.118560 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:29:31.118566 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:29:31.118573 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:29:31.118579 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:29:31.118586 | orchestrator | ok: [testbed-manager] 2026-03-07 00:29:31.118592 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:29:31.118599 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:29:31.118604 | orchestrator | 2026-03-07 00:29:31.118610 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2026-03-07 00:29:31.118615 | orchestrator | Saturday 07 March 2026 00:29:30 +0000 (0:00:00.655) 0:03:53.816 ******** 2026-03-07 00:29:31.118623 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1772841929.8810093, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-07 00:29:31.118639 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1772841935.2348425, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-07 00:29:31.118645 | orchestrator | changed: 
[testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1772841896.8901427, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-07 00:29:31.118658 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1772841917.9735675, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-07 00:29:36.550960 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1772841931.7305012, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-07 00:29:36.551114 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 
'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1772841919.3251493, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-07 00:29:36.551123 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1772841920.307728, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-07 00:29:36.551127 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-07 00:29:36.551147 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 
'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-07 00:29:36.551163 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-07 00:29:36.551167 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-07 00:29:36.551189 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-07 00:29:36.551194 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 
'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-07 00:29:36.551198 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-07 00:29:36.551202 | orchestrator | 2026-03-07 00:29:36.551208 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2026-03-07 00:29:36.551213 | orchestrator | Saturday 07 March 2026 00:29:31 +0000 (0:00:00.970) 0:03:54.787 ******** 2026-03-07 00:29:36.551218 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:29:36.551223 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:29:36.551232 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:29:36.551236 | orchestrator | changed: [testbed-manager] 2026-03-07 00:29:36.551240 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:29:36.551243 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:29:36.551247 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:29:36.551251 | orchestrator | 2026-03-07 00:29:36.551256 | orchestrator | TASK [osism.commons.motd : Copy issue file] 
************************************ 2026-03-07 00:29:36.551260 | orchestrator | Saturday 07 March 2026 00:29:32 +0000 (0:00:01.098) 0:03:55.886 ******** 2026-03-07 00:29:36.551264 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:29:36.551268 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:29:36.551271 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:29:36.551275 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:29:36.551279 | orchestrator | changed: [testbed-manager] 2026-03-07 00:29:36.551283 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:29:36.551287 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:29:36.551291 | orchestrator | 2026-03-07 00:29:36.551295 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ******************************** 2026-03-07 00:29:36.551299 | orchestrator | Saturday 07 March 2026 00:29:33 +0000 (0:00:01.212) 0:03:57.098 ******** 2026-03-07 00:29:36.551303 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:29:36.551307 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:29:36.551311 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:29:36.551314 | orchestrator | changed: [testbed-manager] 2026-03-07 00:29:36.551318 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:29:36.551322 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:29:36.551326 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:29:36.551330 | orchestrator | 2026-03-07 00:29:36.551334 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ******************** 2026-03-07 00:29:36.551342 | orchestrator | Saturday 07 March 2026 00:29:34 +0000 (0:00:01.137) 0:03:58.236 ******** 2026-03-07 00:29:36.551346 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:29:36.551350 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:29:36.551354 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:29:36.551358 | orchestrator | skipping: [testbed-manager] 
2026-03-07 00:29:36.551362 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:29:36.551366 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:29:36.551370 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:29:36.551373 | orchestrator | 2026-03-07 00:29:36.551377 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 2026-03-07 00:29:36.551381 | orchestrator | Saturday 07 March 2026 00:29:35 +0000 (0:00:00.305) 0:03:58.541 ******** 2026-03-07 00:29:36.551385 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:29:36.551390 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:29:36.551394 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:29:36.551398 | orchestrator | ok: [testbed-manager] 2026-03-07 00:29:36.551402 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:29:36.551406 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:29:36.551410 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:29:36.551414 | orchestrator | 2026-03-07 00:29:36.551418 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2026-03-07 00:29:36.551422 | orchestrator | Saturday 07 March 2026 00:29:36 +0000 (0:00:00.767) 0:03:59.308 ******** 2026-03-07 00:29:36.551427 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 00:29:36.551433 | orchestrator | 2026-03-07 00:29:36.551437 | orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2026-03-07 00:29:36.551444 | orchestrator | Saturday 07 March 2026 00:29:36 +0000 (0:00:00.479) 0:03:59.788 ******** 2026-03-07 00:30:54.730565 | orchestrator | ok: [testbed-manager] 2026-03-07 00:30:54.730667 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:30:54.730677 | orchestrator | changed: 
[testbed-node-0] 2026-03-07 00:30:54.730706 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:30:54.730713 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:30:54.730720 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:30:54.730727 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:30:54.730735 | orchestrator | 2026-03-07 00:30:54.730743 | orchestrator | TASK [osism.services.rng : Remove haveged package] ***************************** 2026-03-07 00:30:54.730751 | orchestrator | Saturday 07 March 2026 00:29:44 +0000 (0:00:07.981) 0:04:07.769 ******** 2026-03-07 00:30:54.730758 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:30:54.730765 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:30:54.730772 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:30:54.730779 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:30:54.730786 | orchestrator | ok: [testbed-manager] 2026-03-07 00:30:54.730792 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:30:54.730799 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:30:54.730806 | orchestrator | 2026-03-07 00:30:54.730813 | orchestrator | TASK [osism.services.rng : Manage rng service] ********************************* 2026-03-07 00:30:54.730820 | orchestrator | Saturday 07 March 2026 00:29:45 +0000 (0:00:01.387) 0:04:09.157 ******** 2026-03-07 00:30:54.730826 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:30:54.730833 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:30:54.730840 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:30:54.730846 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:30:54.730853 | orchestrator | ok: [testbed-manager] 2026-03-07 00:30:54.730860 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:30:54.730933 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:30:54.730943 | orchestrator | 2026-03-07 00:30:54.730950 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ****** 2026-03-07 00:30:54.730957 | orchestrator | 
Saturday 07 March 2026 00:29:46 +0000 (0:00:01.017) 0:04:10.175 ******** 2026-03-07 00:30:54.730964 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:30:54.730970 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:30:54.730977 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:30:54.730984 | orchestrator | ok: [testbed-manager] 2026-03-07 00:30:54.730991 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:30:54.730997 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:30:54.731004 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:30:54.731011 | orchestrator | 2026-03-07 00:30:54.731017 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] *** 2026-03-07 00:30:54.731025 | orchestrator | Saturday 07 March 2026 00:29:47 +0000 (0:00:00.353) 0:04:10.528 ******** 2026-03-07 00:30:54.731032 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:30:54.731039 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:30:54.731045 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:30:54.731052 | orchestrator | ok: [testbed-manager] 2026-03-07 00:30:54.731059 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:30:54.731066 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:30:54.731072 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:30:54.731079 | orchestrator | 2026-03-07 00:30:54.731086 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] *** 2026-03-07 00:30:54.731092 | orchestrator | Saturday 07 March 2026 00:29:47 +0000 (0:00:00.362) 0:04:10.891 ******** 2026-03-07 00:30:54.731099 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:30:54.731106 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:30:54.731113 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:30:54.731119 | orchestrator | ok: [testbed-manager] 2026-03-07 00:30:54.731126 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:30:54.731133 | orchestrator | ok: [testbed-node-1] 2026-03-07 
00:30:54.731139 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:30:54.731146 | orchestrator | 2026-03-07 00:30:54.731153 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] ************************** 2026-03-07 00:30:54.731160 | orchestrator | Saturday 07 March 2026 00:29:48 +0000 (0:00:00.376) 0:04:11.268 ******** 2026-03-07 00:30:54.731167 | orchestrator | ok: [testbed-manager] 2026-03-07 00:30:54.731174 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:30:54.731180 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:30:54.731193 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:30:54.731200 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:30:54.731207 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:30:54.731214 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:30:54.731220 | orchestrator | 2026-03-07 00:30:54.731227 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] ******* 2026-03-07 00:30:54.731234 | orchestrator | Saturday 07 March 2026 00:29:53 +0000 (0:00:05.625) 0:04:16.893 ******** 2026-03-07 00:30:54.731243 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 00:30:54.731251 | orchestrator | 2026-03-07 00:30:54.731258 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************ 2026-03-07 00:30:54.731265 | orchestrator | Saturday 07 March 2026 00:29:54 +0000 (0:00:00.488) 0:04:17.382 ******** 2026-03-07 00:30:54.731272 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)  2026-03-07 00:30:54.731279 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)  2026-03-07 00:30:54.731286 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)  2026-03-07 00:30:54.731293 | orchestrator | skipping: 
[testbed-node-4] => (item=apt-daily)  2026-03-07 00:30:54.731300 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:30:54.731306 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)  2026-03-07 00:30:54.731313 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)  2026-03-07 00:30:54.731320 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:30:54.731326 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)  2026-03-07 00:30:54.731333 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:30:54.731340 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)  2026-03-07 00:30:54.731347 | orchestrator | skipping: [testbed-manager] 2026-03-07 00:30:54.731354 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)  2026-03-07 00:30:54.731361 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)  2026-03-07 00:30:54.731368 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)  2026-03-07 00:30:54.731375 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)  2026-03-07 00:30:54.731396 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:30:54.731403 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:30:54.731410 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)  2026-03-07 00:30:54.731417 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)  2026-03-07 00:30:54.731424 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:30:54.731431 | orchestrator | 2026-03-07 00:30:54.731438 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] *************************** 2026-03-07 00:30:54.731445 | orchestrator | Saturday 07 March 2026 00:29:54 +0000 (0:00:00.411) 0:04:17.794 ******** 2026-03-07 00:30:54.731452 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-node-3, testbed-node-4, 
testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 00:30:54.731459 | orchestrator | 2026-03-07 00:30:54.731466 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ******************************** 2026-03-07 00:30:54.731473 | orchestrator | Saturday 07 March 2026 00:29:55 +0000 (0:00:00.503) 0:04:18.298 ******** 2026-03-07 00:30:54.731480 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)  2026-03-07 00:30:54.731487 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:30:54.731494 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)  2026-03-07 00:30:54.731500 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:30:54.731507 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)  2026-03-07 00:30:54.731514 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:30:54.731527 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)  2026-03-07 00:30:54.731534 | orchestrator | skipping: [testbed-manager] 2026-03-07 00:30:54.731541 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)  2026-03-07 00:30:54.731564 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)  2026-03-07 00:30:54.731571 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:30:54.731578 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:30:54.731585 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)  2026-03-07 00:30:54.731591 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:30:54.731598 | orchestrator | 2026-03-07 00:30:54.731604 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] ************************** 2026-03-07 00:30:54.731611 | orchestrator | Saturday 07 March 2026 00:29:55 +0000 (0:00:00.401) 0:04:18.699 ******** 2026-03-07 00:30:54.731618 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 00:30:54.731625 | orchestrator | 2026-03-07 00:30:54.731631 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] ********************** 2026-03-07 00:30:54.731638 | orchestrator | Saturday 07 March 2026 00:29:56 +0000 (0:00:00.580) 0:04:19.279 ******** 2026-03-07 00:30:54.731645 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:30:54.731652 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:30:54.731658 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:30:54.731665 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:30:54.731671 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:30:54.731678 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:30:54.731685 | orchestrator | changed: [testbed-manager] 2026-03-07 00:30:54.731691 | orchestrator | 2026-03-07 00:30:54.731698 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************ 2026-03-07 00:30:54.731705 | orchestrator | Saturday 07 March 2026 00:30:31 +0000 (0:00:35.382) 0:04:54.661 ******** 2026-03-07 00:30:54.731711 | orchestrator | changed: [testbed-manager] 2026-03-07 00:30:54.731718 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:30:54.731725 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:30:54.731731 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:30:54.731738 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:30:54.731744 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:30:54.731754 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:30:54.731761 | orchestrator | 2026-03-07 00:30:54.731767 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] *********** 2026-03-07 00:30:54.731774 | orchestrator | 
Saturday 07 March 2026 00:30:39 +0000 (0:00:07.984) 0:05:02.646 ********
changed: [testbed-node-3]
changed: [testbed-node-5]
changed: [testbed-node-4]
changed: [testbed-node-1]
changed: [testbed-node-0]
changed: [testbed-node-2]
changed: [testbed-manager]

TASK [osism.commons.cleanup : Remove useless packages from the cache] **********
Saturday 07 March 2026 00:30:47 +0000 (0:00:07.657) 0:05:10.304 ********
ok: [testbed-node-3]
ok: [testbed-node-5]
ok: [testbed-node-4]
ok: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [osism.commons.cleanup : Remove dependencies that are no longer required] ***
Saturday 07 March 2026 00:30:48 +0000 (0:00:01.658) 0:05:11.962 ********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-manager]

TASK [osism.commons.cleanup : Include cloudinit tasks] *************************
Saturday 07 March 2026 00:30:54 +0000 (0:00:06.003) 0:05:17.965 ********
included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2

TASK [osism.commons.cleanup : Remove cloud-init configuration directory] *******
Saturday 07 March 2026 00:30:55 +0000 (0:00:00.508) 0:05:18.474 ********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-manager]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [osism.commons.timezone : Install tzdata package] *************************
Saturday 07 March 2026 00:30:55 +0000 (0:00:00.745) 0:05:19.220 ********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-manager]
ok: [testbed-node-1]
ok: [testbed-node-0]
ok: [testbed-node-2]

TASK [osism.commons.timezone : Set timezone to UTC] ****************************
Saturday 07 March 2026 00:30:57 +0000 (0:00:01.690) 0:05:20.910 ********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-node-1]
changed: [testbed-node-0]
changed: [testbed-node-2]
changed: [testbed-manager]

TASK [osism.commons.timezone : Create /etc/adjtime file] ***********************
Saturday 07 March 2026 00:30:58 +0000 (0:00:00.830) 0:05:21.740 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-manager]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] *********************
Saturday 07 March 2026 00:30:58 +0000 (0:00:00.321) 0:05:22.062 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-manager]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [osism.services.docker : Gather variables for each operating system] ******
Saturday 07 March 2026 00:30:59 +0000 (0:00:00.432) 0:05:22.494 ********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [osism.services.docker : Set docker_version variable to default value] ****
Saturday 07 March 2026 00:30:59 +0000 (0:00:00.317) 0:05:22.812 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-manager]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [osism.services.docker : Set docker_cli_version variable to default value] ***
Saturday 07 March 2026 00:30:59 +0000 (0:00:00.320) 0:05:23.132 ********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [osism.services.docker : Print used docker version] ***********************
Saturday 07 March 2026 00:31:00 +0000 (0:00:00.338) 0:05:23.471 ********
ok: [testbed-node-3] =>
  docker_version: 5:27.5.1
ok: [testbed-node-4] =>
  docker_version: 5:27.5.1
ok: [testbed-node-5] =>
  docker_version: 5:27.5.1
ok: [testbed-manager] =>
  docker_version: 5:27.5.1
ok: [testbed-node-0] =>
  docker_version: 5:27.5.1
ok: [testbed-node-1] =>
  docker_version: 5:27.5.1
ok: [testbed-node-2] =>
  docker_version: 5:27.5.1

TASK [osism.services.docker : Print used docker cli version] *******************
Saturday 07 March 2026 00:31:00 +0000 (0:00:00.349) 0:05:23.820 ********
ok: [testbed-node-3] =>
  docker_cli_version: 5:27.5.1
ok: [testbed-node-4] =>
  docker_cli_version: 5:27.5.1
ok: [testbed-node-5] =>
  docker_cli_version: 5:27.5.1
ok: [testbed-manager] =>
  docker_cli_version: 5:27.5.1
ok: [testbed-node-0] =>
  docker_cli_version: 5:27.5.1
ok: [testbed-node-1] =>
  docker_cli_version: 5:27.5.1
ok: [testbed-node-2] =>
  docker_cli_version: 5:27.5.1

TASK [osism.services.docker : Include block storage tasks] *********************
Saturday 07 March 2026 00:31:00 +0000 (0:00:00.293) 0:05:24.114 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-manager]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [osism.services.docker : Include zram storage tasks] **********************
Saturday 07 March 2026 00:31:01 +0000 (0:00:00.319) 0:05:24.433 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-manager]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [osism.services.docker : Include docker install tasks] ********************
Saturday 07 March 2026 00:31:01 +0000 (0:00:00.314) 0:05:24.748 ********
included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2

TASK [osism.services.docker : Remove old architecture-dependent repository] ****
Saturday 07 March 2026 00:31:02 +0000 (0:00:00.605) 0:05:25.354 ********
ok: [testbed-node-0]
ok: [testbed-node-3]
ok: [testbed-manager]
ok: [testbed-node-5]
ok: [testbed-node-1]
ok: [testbed-node-4]
ok: [testbed-node-2]

TASK [osism.services.docker : Gather package facts] ****************************
Saturday 07 March 2026 00:31:02 +0000 (0:00:00.812) 0:05:26.166 ********
ok: [testbed-node-1]
ok: [testbed-node-4]
ok: [testbed-node-3]
ok: [testbed-node-0]
ok: [testbed-node-5]
ok: [testbed-node-2]
ok: [testbed-manager]

TASK [osism.services.docker : Check whether packages are installed that should not be installed] ***
Saturday 07 March 2026 00:31:05 +0000 (0:00:03.008) 0:05:29.175 ********
skipping: [testbed-node-3] => (item=containerd)
skipping: [testbed-node-3] => (item=docker.io)
skipping: [testbed-node-3] => (item=docker-engine)
skipping: [testbed-node-3]
skipping: [testbed-node-4] => (item=containerd)
skipping: [testbed-node-4] => (item=docker.io)
skipping: [testbed-node-4] => (item=docker-engine)
skipping: [testbed-node-4]
skipping: [testbed-node-5] => (item=containerd)
skipping: [testbed-node-5] => (item=docker.io)
skipping: [testbed-node-5] => (item=docker-engine)
skipping: [testbed-node-5]
skipping: [testbed-manager] => (item=containerd)
skipping: [testbed-manager] => (item=docker.io)
skipping: [testbed-manager] => (item=docker-engine)
skipping: [testbed-node-0] => (item=containerd)
skipping: [testbed-node-0] => (item=docker.io)
skipping: [testbed-node-0] => (item=docker-engine)
skipping: [testbed-manager]
skipping: [testbed-node-1] => (item=containerd)
skipping: [testbed-node-1] => (item=docker.io)
skipping: [testbed-node-1] => (item=docker-engine)
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item=containerd)
skipping: [testbed-node-2] => (item=docker.io)
skipping: [testbed-node-2] => (item=docker-engine)
skipping: [testbed-node-2]

TASK [osism.services.docker : Install apt-transport-https package] *************
Saturday 07 March 2026 00:31:06 +0000 (0:00:00.687) 0:05:29.862 ********
ok: [testbed-manager]
changed: [testbed-node-4]
changed: [testbed-node-3]
changed: [testbed-node-5]
changed: [testbed-node-0]
changed: [testbed-node-2]
changed: [testbed-node-1]

TASK [osism.services.docker : Add repository gpg key] **************************
Saturday 07 March 2026 00:31:13 +0000 (0:00:06.492) 0:05:36.354 ********
changed: [testbed-node-4]
changed: [testbed-node-3]
changed: [testbed-node-5]
ok: [testbed-manager]
changed: [testbed-node-1]
changed: [testbed-node-0]
changed: [testbed-node-2]

TASK [osism.services.docker : Add repository] **********************************
Saturday 07 March 2026 00:31:14 +0000 (0:00:01.184) 0:05:37.539 ********
ok: [testbed-manager]
changed: [testbed-node-3]
changed: [testbed-node-5]
changed: [testbed-node-4]
changed: [testbed-node-0]
changed: [testbed-node-2]
changed: [testbed-node-1]

TASK [osism.services.docker : Update package cache] ****************************
Saturday 07 March 2026 00:31:22 +0000 (0:00:08.102) 0:05:45.642 ********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-manager]
changed: [testbed-node-5]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [osism.services.docker : Pin docker package version] **********************
Saturday 07 March 2026 00:31:25 +0000 (0:00:03.569) 0:05:49.211 ********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]
ok: [testbed-manager]
changed: [testbed-node-0]
changed: [testbed-node-2]
changed: [testbed-node-1]

TASK [osism.services.docker : Pin docker-cli package version] ******************
Saturday 07 March 2026 00:31:27 +0000 (0:00:01.499) 0:05:50.711 ********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]
ok: [testbed-manager]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [osism.services.docker : Unlock containerd package] ***********************
Saturday 07 March 2026 00:31:29 +0000 (0:00:01.654) 0:05:52.365 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
changed: [testbed-manager]

TASK [osism.services.docker : Install containerd package] **********************
Saturday 07 March 2026 00:31:30 +0000 (0:00:00.956) 0:05:53.322 ********
ok: [testbed-manager]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-0]
changed: [testbed-node-2]
changed: [testbed-node-5]
changed: [testbed-node-1]

TASK [osism.services.docker : Lock containerd package] *************************
Saturday 07 March 2026 00:31:39 +0000 (0:00:09.414) 0:06:02.737 ********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-manager]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [osism.services.docker : Install docker-cli package] **********************
Saturday 07 March 2026 00:31:40 +0000 (0:00:00.940) 0:06:03.678 ********
ok: [testbed-manager]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [osism.services.docker : Install docker package] **************************
Saturday 07 March 2026 00:31:48 +0000 (0:00:08.532) 0:06:12.210 ********
ok: [testbed-manager]
changed: [testbed-node-4]
changed: [testbed-node-3]
changed: [testbed-node-2]
changed: [testbed-node-0]
changed: [testbed-node-5]
changed: [testbed-node-1]

TASK [osism.services.docker : Unblock installation of python docker packages] ***
Saturday 07 March 2026 00:31:59 +0000 (0:00:10.317) 0:06:22.528 ********
ok: [testbed-node-3] => (item=python3-docker)
ok: [testbed-node-4] => (item=python3-docker)
ok: [testbed-node-5] => (item=python3-docker)
ok: [testbed-manager] => (item=python3-docker)
ok: [testbed-node-0] => (item=python3-docker)
ok: [testbed-node-3] => (item=python-docker)
ok: [testbed-node-4] => (item=python-docker)
ok: [testbed-node-1] => (item=python3-docker)
ok: [testbed-node-5] => (item=python-docker)
ok: [testbed-node-2] => (item=python3-docker)
ok: [testbed-manager] => (item=python-docker)
ok: [testbed-node-0] => (item=python-docker)
ok: [testbed-node-1] => (item=python-docker)
ok: [testbed-node-2] => (item=python-docker)

TASK [osism.services.docker : Install python3 docker package] ******************
Saturday 07 March 2026 00:32:00 +0000 (0:00:01.290) 0:06:23.818 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-manager]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [osism.services.docker : Install python3 docker package from Debian Sid] ***
Saturday 07 March 2026 00:32:01 +0000 (0:00:00.551) 0:06:24.369 ********
ok: [testbed-manager]
changed: [testbed-node-4]
changed: [testbed-node-3]
changed: [testbed-node-5]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] ***
Saturday 07 March 2026 00:32:04 +0000 (0:00:03.794) 0:06:28.164 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-manager]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] ***
Saturday 07 March 2026 00:32:05 +0000 (0:00:00.739) 0:06:28.904 ********
skipping: [testbed-node-3] => (item=python3-docker)
skipping: [testbed-node-3] => (item=python-docker)
skipping: [testbed-node-4] => (item=python3-docker)
skipping: [testbed-node-4] => (item=python-docker)
skipping: [testbed-node-3]
skipping: [testbed-node-5] => (item=python3-docker)
skipping: [testbed-node-5] => (item=python-docker)
skipping: [testbed-node-4]
skipping: [testbed-manager] => (item=python3-docker)
skipping: [testbed-manager] => (item=python-docker)
skipping: [testbed-node-5]
skipping: [testbed-node-0] => (item=python3-docker)
skipping: [testbed-node-0] => (item=python-docker)
skipping: [testbed-manager]
skipping: [testbed-node-1] => (item=python3-docker)
skipping: [testbed-node-1] => (item=python-docker)
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item=python3-docker)
skipping: [testbed-node-2] => (item=python-docker)
skipping: [testbed-node-2]

TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] ***
Saturday 07 March 2026 00:32:06 +0000 (0:00:00.677) 0:06:29.582 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-manager]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [osism.services.docker : Install docker packages (install python bindings from pip)] ***
Saturday 07 March 2026 00:32:06 +0000 (0:00:00.585) 0:06:30.167 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-manager]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [osism.services.docker : Install packages required by docker login] *******
Saturday 07 March 2026 00:32:07 +0000 (0:00:00.613) 0:06:30.781 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-manager]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [osism.services.docker : Ensure that some packages are not installed] *****
Saturday 07 March 2026 00:32:08 +0000 (0:00:00.567) 0:06:31.348 ********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-manager]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [osism.services.docker : Include config tasks] ****************************
Saturday 07 March 2026 00:32:10 +0000 (0:00:02.014) 0:06:33.363 ********
included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2

TASK [osism.services.docker : Create plugins directory] ************************
Saturday 07 March 2026 00:32:11 +0000 (0:00:00.918) 0:06:34.281 ********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]
ok: [testbed-manager]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [osism.services.docker : Create systemd overlay directory] ****************
Saturday 07 March 2026 00:32:11 +0000 (0:00:00.887) 0:06:35.168 ********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]
ok: [testbed-manager]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [osism.services.docker : Copy systemd overlay file] ***********************
Saturday 07 March 2026 00:32:13 +0000 (0:00:01.194) 0:06:36.362 ********
changed: [testbed-node-4]
changed: [testbed-node-3]
changed: [testbed-node-5]
ok: [testbed-manager]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] ***
Saturday 07 March 2026 00:32:14 +0000 (0:00:01.487) 0:06:37.850 ********
skipping: [testbed-manager]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [osism.services.docker : Copy limits configuration file] ******************
Saturday 07 March 2026 00:32:16 +0000 (0:00:01.390) 0:06:39.241 ********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]
ok: [testbed-manager]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [osism.services.docker : Copy daemon.json configuration file] *************
Saturday 07 March 2026 00:32:17 +0000 (0:00:01.409) 0:06:40.650 ********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-manager]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [osism.services.docker : Include service tasks] ***************************
Saturday 07 March 2026 00:32:18 +0000 (0:00:01.446) 0:06:42.096 ********
included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2

TASK [osism.services.docker : Reload systemd daemon] ***************************
Saturday 07 March 2026 00:32:19 +0000 (0:00:01.105) 0:06:43.202 ********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-manager]
ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2] 2026-03-07 00:32:26.128752 | orchestrator | 2026-03-07 00:32:26.128761 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2026-03-07 00:32:26.128770 | orchestrator | Saturday 07 March 2026 00:32:21 +0000 (0:00:01.353) 0:06:44.555 ******** 2026-03-07 00:32:26.128778 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:32:26.128787 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:32:26.128796 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:32:26.128804 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:32:26.128813 | orchestrator | ok: [testbed-manager] 2026-03-07 00:32:26.128822 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:32:26.128830 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:32:26.128838 | orchestrator | 2026-03-07 00:32:26.128861 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2026-03-07 00:32:26.128870 | orchestrator | Saturday 07 March 2026 00:32:22 +0000 (0:00:01.123) 0:06:45.678 ******** 2026-03-07 00:32:26.128889 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:32:26.128898 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:32:26.128906 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:32:26.128915 | orchestrator | ok: [testbed-manager] 2026-03-07 00:32:26.128923 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:32:26.128932 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:32:26.128941 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:32:26.128949 | orchestrator | 2026-03-07 00:32:26.128958 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2026-03-07 00:32:26.128967 | orchestrator | Saturday 07 March 2026 00:32:23 +0000 (0:00:01.125) 0:06:46.804 ******** 2026-03-07 00:32:26.128976 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:32:26.128984 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:32:26.128993 | orchestrator | ok: 
[testbed-node-5] 2026-03-07 00:32:26.129002 | orchestrator | ok: [testbed-manager] 2026-03-07 00:32:26.129010 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:32:26.129018 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:32:26.129027 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:32:26.129036 | orchestrator | 2026-03-07 00:32:26.129067 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2026-03-07 00:32:26.129077 | orchestrator | Saturday 07 March 2026 00:32:24 +0000 (0:00:01.363) 0:06:48.168 ******** 2026-03-07 00:32:26.129086 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 00:32:26.129095 | orchestrator | 2026-03-07 00:32:26.129104 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-07 00:32:26.129113 | orchestrator | Saturday 07 March 2026 00:32:25 +0000 (0:00:01.001) 0:06:49.169 ******** 2026-03-07 00:32:26.129122 | orchestrator | 2026-03-07 00:32:26.129130 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-07 00:32:26.129139 | orchestrator | Saturday 07 March 2026 00:32:25 +0000 (0:00:00.049) 0:06:49.219 ******** 2026-03-07 00:32:26.129148 | orchestrator | 2026-03-07 00:32:26.129169 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-07 00:32:26.129178 | orchestrator | Saturday 07 March 2026 00:32:26 +0000 (0:00:00.040) 0:06:49.260 ******** 2026-03-07 00:32:26.129187 | orchestrator | 2026-03-07 00:32:26.129196 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-07 00:32:26.129204 | orchestrator | Saturday 07 March 2026 00:32:26 +0000 (0:00:00.048) 0:06:49.308 ******** 2026-03-07 
00:32:26.129213 | orchestrator | 2026-03-07 00:32:26.129246 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-07 00:32:53.377756 | orchestrator | Saturday 07 March 2026 00:32:26 +0000 (0:00:00.051) 0:06:49.360 ******** 2026-03-07 00:32:53.377856 | orchestrator | 2026-03-07 00:32:53.377866 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-07 00:32:53.377874 | orchestrator | Saturday 07 March 2026 00:32:26 +0000 (0:00:00.045) 0:06:49.405 ******** 2026-03-07 00:32:53.377880 | orchestrator | 2026-03-07 00:32:53.377887 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-07 00:32:53.377894 | orchestrator | Saturday 07 March 2026 00:32:26 +0000 (0:00:00.042) 0:06:49.448 ******** 2026-03-07 00:32:53.377900 | orchestrator | 2026-03-07 00:32:53.377906 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-03-07 00:32:53.377913 | orchestrator | Saturday 07 March 2026 00:32:26 +0000 (0:00:00.051) 0:06:49.499 ******** 2026-03-07 00:32:53.377919 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:32:53.377927 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:32:53.377934 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:32:53.377940 | orchestrator | 2026-03-07 00:32:53.377946 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2026-03-07 00:32:53.377953 | orchestrator | Saturday 07 March 2026 00:32:27 +0000 (0:00:01.138) 0:06:50.637 ******** 2026-03-07 00:32:53.377960 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:32:53.377968 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:32:53.377975 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:32:53.377981 | orchestrator | changed: [testbed-manager] 2026-03-07 00:32:53.377987 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:32:53.377994 | 
orchestrator | changed: [testbed-node-1] 2026-03-07 00:32:53.378001 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:32:53.378007 | orchestrator | 2026-03-07 00:32:53.378054 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart logrotate service] *********** 2026-03-07 00:32:53.378061 | orchestrator | Saturday 07 March 2026 00:32:28 +0000 (0:00:01.530) 0:06:52.168 ******** 2026-03-07 00:32:53.378067 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:32:53.378074 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:32:53.378080 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:32:53.378085 | orchestrator | changed: [testbed-manager] 2026-03-07 00:32:53.378091 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:32:53.378097 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:32:53.378104 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:32:53.378110 | orchestrator | 2026-03-07 00:32:53.378116 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2026-03-07 00:32:53.378122 | orchestrator | Saturday 07 March 2026 00:32:30 +0000 (0:00:01.257) 0:06:53.425 ******** 2026-03-07 00:32:53.378128 | orchestrator | skipping: [testbed-manager] 2026-03-07 00:32:53.378135 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:32:53.378140 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:32:53.378146 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:32:53.378153 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:32:53.378159 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:32:53.378166 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:32:53.378172 | orchestrator | 2026-03-07 00:32:53.378178 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2026-03-07 00:32:53.378184 | orchestrator | Saturday 07 March 2026 00:32:32 +0000 (0:00:02.353) 0:06:55.779 ******** 2026-03-07 00:32:53.378190 | orchestrator | 
skipping: [testbed-node-3] 2026-03-07 00:32:53.378196 | orchestrator | 2026-03-07 00:32:53.378203 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2026-03-07 00:32:53.378209 | orchestrator | Saturday 07 March 2026 00:32:32 +0000 (0:00:00.092) 0:06:55.872 ******** 2026-03-07 00:32:53.378215 | orchestrator | ok: [testbed-manager] 2026-03-07 00:32:53.378222 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:32:53.378228 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:32:53.378235 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:32:53.378264 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:32:53.378272 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:32:53.378281 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:32:53.378290 | orchestrator | 2026-03-07 00:32:53.378312 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2026-03-07 00:32:53.378322 | orchestrator | Saturday 07 March 2026 00:32:33 +0000 (0:00:01.081) 0:06:56.953 ******** 2026-03-07 00:32:53.378331 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:32:53.378340 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:32:53.378349 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:32:53.378358 | orchestrator | skipping: [testbed-manager] 2026-03-07 00:32:53.378367 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:32:53.378378 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:32:53.378386 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:32:53.378393 | orchestrator | 2026-03-07 00:32:53.378401 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2026-03-07 00:32:53.378409 | orchestrator | Saturday 07 March 2026 00:32:34 +0000 (0:00:00.793) 0:06:57.747 ******** 2026-03-07 00:32:53.378420 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 00:32:53.378432 | orchestrator | 2026-03-07 00:32:53.378441 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2026-03-07 00:32:53.378451 | orchestrator | Saturday 07 March 2026 00:32:35 +0000 (0:00:00.986) 0:06:58.733 ******** 2026-03-07 00:32:53.378461 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:32:53.378472 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:32:53.378482 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:32:53.378491 | orchestrator | ok: [testbed-manager] 2026-03-07 00:32:53.378496 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:32:53.378502 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:32:53.378509 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:32:53.378516 | orchestrator | 2026-03-07 00:32:53.378522 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2026-03-07 00:32:53.378528 | orchestrator | Saturday 07 March 2026 00:32:36 +0000 (0:00:00.853) 0:06:59.587 ******** 2026-03-07 00:32:53.378533 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2026-03-07 00:32:53.378539 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2026-03-07 00:32:53.378561 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2026-03-07 00:32:53.378567 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2026-03-07 00:32:53.378573 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2026-03-07 00:32:53.378579 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2026-03-07 00:32:53.378585 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2026-03-07 00:32:53.378590 | orchestrator | changed: [testbed-node-3] => 
(item=docker_images) 2026-03-07 00:32:53.378596 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2026-03-07 00:32:53.378602 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2026-03-07 00:32:53.378608 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2026-03-07 00:32:53.378614 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2026-03-07 00:32:53.378619 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2026-03-07 00:32:53.378625 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2026-03-07 00:32:53.378631 | orchestrator | 2026-03-07 00:32:53.378636 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] ******* 2026-03-07 00:32:53.378657 | orchestrator | Saturday 07 March 2026 00:32:39 +0000 (0:00:02.842) 0:07:02.429 ******** 2026-03-07 00:32:53.378663 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:32:53.378669 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:32:53.378675 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:32:53.378687 | orchestrator | skipping: [testbed-manager] 2026-03-07 00:32:53.378693 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:32:53.378698 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:32:53.378704 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:32:53.378710 | orchestrator | 2026-03-07 00:32:53.378716 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] *** 2026-03-07 00:32:53.378721 | orchestrator | Saturday 07 March 2026 00:32:39 +0000 (0:00:00.621) 0:07:03.051 ******** 2026-03-07 00:32:53.378730 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 00:32:53.378737 | orchestrator | 2026-03-07 
00:32:53.378743 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] *** 2026-03-07 00:32:53.378749 | orchestrator | Saturday 07 March 2026 00:32:40 +0000 (0:00:00.962) 0:07:04.013 ******** 2026-03-07 00:32:53.378755 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:32:53.378761 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:32:53.378766 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:32:53.378772 | orchestrator | ok: [testbed-manager] 2026-03-07 00:32:53.378777 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:32:53.378783 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:32:53.378789 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:32:53.378795 | orchestrator | 2026-03-07 00:32:53.378800 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2026-03-07 00:32:53.378806 | orchestrator | Saturday 07 March 2026 00:32:41 +0000 (0:00:00.862) 0:07:04.875 ******** 2026-03-07 00:32:53.378811 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:32:53.378817 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:32:53.378822 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:32:53.378828 | orchestrator | ok: [testbed-manager] 2026-03-07 00:32:53.378834 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:32:53.378839 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:32:53.378845 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:32:53.378850 | orchestrator | 2026-03-07 00:32:53.378856 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2026-03-07 00:32:53.378861 | orchestrator | Saturday 07 March 2026 00:32:42 +0000 (0:00:01.143) 0:07:06.019 ******** 2026-03-07 00:32:53.378867 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:32:53.378873 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:32:53.378883 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:32:53.378888 | orchestrator | skipping: 
[testbed-manager] 2026-03-07 00:32:53.378894 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:32:53.378899 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:32:53.378905 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:32:53.378911 | orchestrator | 2026-03-07 00:32:53.378916 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2026-03-07 00:32:53.378922 | orchestrator | Saturday 07 March 2026 00:32:43 +0000 (0:00:00.537) 0:07:06.556 ******** 2026-03-07 00:32:53.378928 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:32:53.378933 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:32:53.378939 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:32:53.378946 | orchestrator | ok: [testbed-manager] 2026-03-07 00:32:53.378952 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:32:53.378957 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:32:53.378963 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:32:53.378969 | orchestrator | 2026-03-07 00:32:53.378974 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] *************** 2026-03-07 00:32:53.378980 | orchestrator | Saturday 07 March 2026 00:32:44 +0000 (0:00:01.587) 0:07:08.144 ******** 2026-03-07 00:32:53.378986 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:32:53.378992 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:32:53.378997 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:32:53.379002 | orchestrator | skipping: [testbed-manager] 2026-03-07 00:32:53.379013 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:32:53.379019 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:32:53.379024 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:32:53.379030 | orchestrator | 2026-03-07 00:32:53.379036 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] **** 2026-03-07 00:32:53.379042 | orchestrator | Saturday 07 March 2026 
00:32:45 +0000 (0:00:00.555) 0:07:08.699 ******** 2026-03-07 00:32:53.379048 | orchestrator | ok: [testbed-manager] 2026-03-07 00:32:53.379053 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:32:53.379059 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:32:53.379064 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:32:53.379070 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:32:53.379076 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:32:53.379081 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:32:53.379087 | orchestrator | 2026-03-07 00:32:53.379097 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] *********** 2026-03-07 00:33:26.679867 | orchestrator | Saturday 07 March 2026 00:32:53 +0000 (0:00:07.911) 0:07:16.611 ******** 2026-03-07 00:33:26.679972 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:33:26.679987 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:33:26.679996 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:33:26.680004 | orchestrator | ok: [testbed-manager] 2026-03-07 00:33:26.680013 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:33:26.680022 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:33:26.680029 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:33:26.680037 | orchestrator | 2026-03-07 00:33:26.680045 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] ********************** 2026-03-07 00:33:26.680053 | orchestrator | Saturday 07 March 2026 00:32:54 +0000 (0:00:01.629) 0:07:18.240 ******** 2026-03-07 00:33:26.680061 | orchestrator | ok: [testbed-manager] 2026-03-07 00:33:26.680069 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:33:26.680076 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:33:26.680084 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:33:26.680092 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:33:26.680100 | orchestrator | changed: 
[testbed-node-0] 2026-03-07 00:33:26.680107 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:33:26.680115 | orchestrator | 2026-03-07 00:33:26.680123 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] **** 2026-03-07 00:33:26.680142 | orchestrator | Saturday 07 March 2026 00:32:56 +0000 (0:00:01.730) 0:07:19.971 ******** 2026-03-07 00:33:26.680150 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:33:26.680157 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:33:26.680165 | orchestrator | ok: [testbed-manager] 2026-03-07 00:33:26.680172 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:33:26.680179 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:33:26.680186 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:33:26.680193 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:33:26.680200 | orchestrator | 2026-03-07 00:33:26.680208 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-03-07 00:33:26.680215 | orchestrator | Saturday 07 March 2026 00:32:58 +0000 (0:00:01.798) 0:07:21.769 ******** 2026-03-07 00:33:26.680222 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:33:26.680230 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:33:26.680237 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:33:26.680244 | orchestrator | ok: [testbed-manager] 2026-03-07 00:33:26.680251 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:33:26.680259 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:33:26.680266 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:33:26.680273 | orchestrator | 2026-03-07 00:33:26.680280 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-03-07 00:33:26.680288 | orchestrator | Saturday 07 March 2026 00:32:59 +0000 (0:00:01.101) 0:07:22.870 ******** 2026-03-07 00:33:26.680295 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:33:26.680302 | orchestrator 
| skipping: [testbed-node-4] 2026-03-07 00:33:26.680330 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:33:26.680338 | orchestrator | skipping: [testbed-manager] 2026-03-07 00:33:26.680345 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:33:26.680353 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:33:26.680365 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:33:26.680378 | orchestrator | 2026-03-07 00:33:26.680387 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] ***** 2026-03-07 00:33:26.680400 | orchestrator | Saturday 07 March 2026 00:33:00 +0000 (0:00:00.873) 0:07:23.744 ******** 2026-03-07 00:33:26.680410 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:33:26.680421 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:33:26.680431 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:33:26.680444 | orchestrator | skipping: [testbed-manager] 2026-03-07 00:33:26.680452 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:33:26.680460 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:33:26.680468 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:33:26.680477 | orchestrator | 2026-03-07 00:33:26.680485 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ****** 2026-03-07 00:33:26.680494 | orchestrator | Saturday 07 March 2026 00:33:01 +0000 (0:00:00.562) 0:07:24.307 ******** 2026-03-07 00:33:26.680502 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:33:26.680511 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:33:26.680519 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:33:26.680528 | orchestrator | ok: [testbed-manager] 2026-03-07 00:33:26.680536 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:33:26.680544 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:33:26.680552 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:33:26.680560 | orchestrator | 2026-03-07 00:33:26.680569 | orchestrator | 
TASK [osism.services.chrony : Set chrony_conf_file variable to default value] *** 2026-03-07 00:33:26.680577 | orchestrator | Saturday 07 March 2026 00:33:01 +0000 (0:00:00.599) 0:07:24.906 ******** 2026-03-07 00:33:26.680602 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:33:26.680611 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:33:26.680620 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:33:26.680628 | orchestrator | ok: [testbed-manager] 2026-03-07 00:33:26.680636 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:33:26.680649 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:33:26.680664 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:33:26.680682 | orchestrator | 2026-03-07 00:33:26.680694 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] *** 2026-03-07 00:33:26.680705 | orchestrator | Saturday 07 March 2026 00:33:02 +0000 (0:00:00.782) 0:07:25.689 ******** 2026-03-07 00:33:26.680717 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:33:26.680729 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:33:26.680740 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:33:26.680750 | orchestrator | ok: [testbed-manager] 2026-03-07 00:33:26.680763 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:33:26.680775 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:33:26.680788 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:33:26.680799 | orchestrator | 2026-03-07 00:33:26.680810 | orchestrator | TASK [osism.services.chrony : Populate service facts] ************************** 2026-03-07 00:33:26.680822 | orchestrator | Saturday 07 March 2026 00:33:03 +0000 (0:00:00.652) 0:07:26.341 ******** 2026-03-07 00:33:26.680833 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:33:26.680845 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:33:26.680856 | orchestrator | ok: [testbed-manager] 2026-03-07 00:33:26.680868 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:33:26.680880 | orchestrator | ok: 
[testbed-node-0] 2026-03-07 00:33:26.680893 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:33:26.680904 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:33:26.680917 | orchestrator | 2026-03-07 00:33:26.680925 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************ 2026-03-07 00:33:26.680947 | orchestrator | Saturday 07 March 2026 00:33:08 +0000 (0:00:05.545) 0:07:31.886 ******** 2026-03-07 00:33:26.680955 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:33:26.680973 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:33:26.680980 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:33:26.681004 | orchestrator | skipping: [testbed-manager] 2026-03-07 00:33:26.681013 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:33:26.681025 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:33:26.681036 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:33:26.681048 | orchestrator | 2026-03-07 00:33:26.681060 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] ***** 2026-03-07 00:33:26.681071 | orchestrator | Saturday 07 March 2026 00:33:09 +0000 (0:00:00.673) 0:07:32.560 ******** 2026-03-07 00:33:26.681084 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 00:33:26.681095 | orchestrator | 2026-03-07 00:33:26.681102 | orchestrator | TASK [osism.services.chrony : Install package] ********************************* 2026-03-07 00:33:26.681109 | orchestrator | Saturday 07 March 2026 00:33:10 +0000 (0:00:01.093) 0:07:33.654 ******** 2026-03-07 00:33:26.681116 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:33:26.681123 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:33:26.681130 | orchestrator | ok: [testbed-node-5] 2026-03-07 
00:33:26.681138 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:33:26.681145 | orchestrator | ok: [testbed-manager]
2026-03-07 00:33:26.681152 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:33:26.681159 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:33:26.681166 | orchestrator |
2026-03-07 00:33:26.681173 | orchestrator | TASK [osism.services.chrony : Manage chrony service] ***************************
2026-03-07 00:33:26.681181 | orchestrator | Saturday 07 March 2026 00:33:12 +0000 (0:00:01.910) 0:07:35.564 ********
2026-03-07 00:33:26.681188 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:33:26.681195 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:33:26.681202 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:33:26.681209 | orchestrator | ok: [testbed-manager]
2026-03-07 00:33:26.681216 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:33:26.681223 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:33:26.681231 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:33:26.681238 | orchestrator |
2026-03-07 00:33:26.681245 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] **************
2026-03-07 00:33:26.681252 | orchestrator | Saturday 07 March 2026 00:33:13 +0000 (0:00:01.276) 0:07:36.840 ********
2026-03-07 00:33:26.681260 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:33:26.681267 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:33:26.681274 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:33:26.681281 | orchestrator | ok: [testbed-manager]
2026-03-07 00:33:26.681288 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:33:26.681295 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:33:26.681307 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:33:26.681318 | orchestrator |
2026-03-07 00:33:26.681330 | orchestrator | TASK [osism.services.chrony : Copy configuration file] *************************
2026-03-07 00:33:26.681342 | orchestrator | Saturday 07 March 2026 00:33:14 +0000 (0:00:00.895) 0:07:37.736 ********
2026-03-07 00:33:26.681354 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-07 00:33:26.681369 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-07 00:33:26.681379 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-07 00:33:26.681410 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-07 00:33:26.681424 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-07 00:33:26.681445 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-07 00:33:26.681458 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-07 00:33:26.681470 | orchestrator |
2026-03-07 00:33:26.681483 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ******
2026-03-07 00:33:26.681495 | orchestrator | Saturday 07 March 2026 00:33:16 +0000 (0:00:01.982) 0:07:39.719 ********
2026-03-07 00:33:26.681508 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-07 00:33:26.681521 | orchestrator |
2026-03-07 00:33:26.681533 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] ****************************
2026-03-07 00:33:26.681545 | orchestrator | Saturday 07 March 2026 00:33:17 +0000 (0:00:00.872) 0:07:40.591 ********
2026-03-07 00:33:26.681558 | orchestrator | changed: [testbed-node-3]
2026-03-07 00:33:26.681571 | orchestrator | changed: [testbed-node-5]
2026-03-07 00:33:26.681604 | orchestrator | changed: [testbed-node-4]
2026-03-07 00:33:26.681617 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:33:26.681625 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:33:26.681633 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:33:26.681640 | orchestrator | changed: [testbed-manager]
2026-03-07 00:33:26.681647 | orchestrator |
2026-03-07 00:33:26.681654 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] *****************************
2026-03-07 00:33:26.681671 | orchestrator | Saturday 07 March 2026 00:33:26 +0000 (0:00:09.321) 0:07:49.912 ********
2026-03-07 00:33:57.671749 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:33:57.671893 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:33:57.671915 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:33:57.671928 | orchestrator | ok: [testbed-manager]
2026-03-07 00:33:57.671939 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:33:57.671950 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:33:57.671961 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:33:57.671972 | orchestrator |
2026-03-07 00:33:57.671984 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] *********
2026-03-07 00:33:57.671997 | orchestrator | Saturday 07 March 2026 00:33:28 +0000 (0:00:02.041) 0:07:51.953 ********
2026-03-07 00:33:57.672008 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:33:57.672019 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:33:57.672030 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:33:57.672041 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:33:57.672053 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:33:57.672063 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:33:57.672074 | orchestrator |
2026-03-07 00:33:57.672085 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] ***************
2026-03-07 00:33:57.672096 | orchestrator | Saturday 07 March 2026 00:33:29 +0000 (0:00:01.255) 0:07:53.208 ********
2026-03-07 00:33:57.672107 | orchestrator | changed: [testbed-node-3]
2026-03-07 00:33:57.672120 | orchestrator | changed: [testbed-node-4]
2026-03-07 00:33:57.672131 | orchestrator | changed: [testbed-node-5]
2026-03-07 00:33:57.672142 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:33:57.672153 | orchestrator | changed: [testbed-manager]
2026-03-07 00:33:57.672163 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:33:57.672174 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:33:57.672185 | orchestrator |
2026-03-07 00:33:57.672199 | orchestrator | PLAY [Apply bootstrap role part 2] *********************************************
2026-03-07 00:33:57.672212 | orchestrator |
2026-03-07 00:33:57.672224 | orchestrator | TASK [Include hardening role] **************************************************
2026-03-07 00:33:57.672244 | orchestrator | Saturday 07 March 2026 00:33:31 +0000 (0:00:01.248) 0:07:54.457 ********
2026-03-07 00:33:57.672262 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:33:57.672314 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:33:57.672335 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:33:57.672355 | orchestrator | skipping: [testbed-manager]
2026-03-07 00:33:57.672375 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:33:57.672389 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:33:57.672402 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:33:57.672415 | orchestrator |
2026-03-07 00:33:57.672428 | orchestrator | PLAY [Apply bootstrap roles part 3] ********************************************
2026-03-07 00:33:57.672440 | orchestrator |
2026-03-07 00:33:57.672453 | orchestrator | TASK [osism.services.journald : Copy configuration file] ***********************
2026-03-07 00:33:57.672464 | orchestrator | Saturday 07 March 2026 00:33:31 +0000 (0:00:00.738) 0:07:55.196 ********
2026-03-07 00:33:57.672477 | orchestrator | changed: [testbed-node-3]
2026-03-07 00:33:57.672491 | orchestrator | changed: [testbed-node-4]
2026-03-07 00:33:57.672504 | orchestrator | changed: [testbed-node-5]
2026-03-07 00:33:57.672517 | orchestrator | changed: [testbed-manager]
2026-03-07 00:33:57.672530 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:33:57.672543 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:33:57.672585 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:33:57.672597 | orchestrator |
2026-03-07 00:33:57.672608 | orchestrator | TASK [osism.services.journald : Manage journald service] ***********************
2026-03-07 00:33:57.672619 | orchestrator | Saturday 07 March 2026 00:33:33 +0000 (0:00:01.324) 0:07:56.520 ********
2026-03-07 00:33:57.672630 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:33:57.672640 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:33:57.672651 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:33:57.672662 | orchestrator | ok: [testbed-manager]
2026-03-07 00:33:57.672672 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:33:57.672683 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:33:57.672693 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:33:57.672704 | orchestrator |
2026-03-07 00:33:57.672715 | orchestrator | TASK [Include auditd role] *****************************************************
2026-03-07 00:33:57.672726 | orchestrator | Saturday 07 March 2026 00:33:34 +0000 (0:00:01.413) 0:07:57.934 ********
2026-03-07 00:33:57.672737 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:33:57.672764 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:33:57.672780 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:33:57.672797 | orchestrator | skipping: [testbed-manager]
2026-03-07 00:33:57.672815 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:33:57.672834 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:33:57.672852 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:33:57.672871 | orchestrator |
2026-03-07 00:33:57.672882 | orchestrator | TASK [Include smartd role] *****************************************************
2026-03-07 00:33:57.672893 | orchestrator | Saturday 07 March 2026 00:33:35 +0000 (0:00:00.748) 0:07:58.683 ********
2026-03-07 00:33:57.672904 | orchestrator | included: osism.services.smartd for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-07 00:33:57.672917 | orchestrator |
2026-03-07 00:33:57.672928 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] *****
2026-03-07 00:33:57.672939 | orchestrator | Saturday 07 March 2026 00:33:36 +0000 (0:00:00.856) 0:07:59.539 ********
2026-03-07 00:33:57.672951 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-07 00:33:57.672964 | orchestrator |
2026-03-07 00:33:57.672975 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] *******************
2026-03-07 00:33:57.672986 | orchestrator | Saturday 07 March 2026 00:33:37 +0000 (0:00:00.837) 0:08:00.377 ********
2026-03-07 00:33:57.672997 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:33:57.673007 | orchestrator | changed: [testbed-node-3]
2026-03-07 00:33:57.673018 | orchestrator | changed: [testbed-node-4]
2026-03-07 00:33:57.673029 | orchestrator | changed: [testbed-node-5]
2026-03-07 00:33:57.673049 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:33:57.673060 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:33:57.673071 | orchestrator | changed: [testbed-manager]
2026-03-07 00:33:57.673082 | orchestrator |
2026-03-07 00:33:57.673093 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] ****************
2026-03-07 00:33:57.673123 | orchestrator | Saturday 07 March 2026 00:33:45 +0000 (0:00:08.836) 0:08:09.213 ********
2026-03-07 00:33:57.673135 | orchestrator | changed: [testbed-node-3]
2026-03-07 00:33:57.673146 | orchestrator | changed: [testbed-node-4]
2026-03-07 00:33:57.673156 | orchestrator | changed: [testbed-node-5]
2026-03-07 00:33:57.673167 | orchestrator | changed: [testbed-manager]
2026-03-07 00:33:57.673178 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:33:57.673188 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:33:57.673199 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:33:57.673210 | orchestrator |
2026-03-07 00:33:57.673220 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] ***********
2026-03-07 00:33:57.673231 | orchestrator | Saturday 07 March 2026 00:33:46 +0000 (0:00:00.898) 0:08:10.112 ********
2026-03-07 00:33:57.673242 | orchestrator | changed: [testbed-node-3]
2026-03-07 00:33:57.673253 | orchestrator | changed: [testbed-node-4]
2026-03-07 00:33:57.673264 | orchestrator | changed: [testbed-node-5]
2026-03-07 00:33:57.673274 | orchestrator | changed: [testbed-manager]
2026-03-07 00:33:57.673285 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:33:57.673295 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:33:57.673306 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:33:57.673317 | orchestrator |
2026-03-07 00:33:57.673342 | orchestrator | TASK [osism.services.smartd : Manage smartd service] ***************************
2026-03-07 00:33:57.673363 | orchestrator | Saturday 07 March 2026 00:33:48 +0000 (0:00:01.361) 0:08:11.473 ********
2026-03-07 00:33:57.673374 | orchestrator | changed: [testbed-node-3]
2026-03-07 00:33:57.673385 | orchestrator | changed: [testbed-node-4]
2026-03-07 00:33:57.673395 | orchestrator | changed: [testbed-node-5]
2026-03-07 00:33:57.673406 | orchestrator | changed: [testbed-manager]
2026-03-07 00:33:57.673417 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:33:57.673428 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:33:57.673438 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:33:57.673449 | orchestrator |
2026-03-07 00:33:57.673460 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] ***********
2026-03-07 00:33:57.673471 | orchestrator | Saturday 07 March 2026 00:33:50 +0000 (0:00:01.970) 0:08:13.444 ********
2026-03-07 00:33:57.673481 | orchestrator | changed: [testbed-node-3]
2026-03-07 00:33:57.673492 | orchestrator | changed: [testbed-node-5]
2026-03-07 00:33:57.673503 | orchestrator | changed: [testbed-node-4]
2026-03-07 00:33:57.673513 | orchestrator | changed: [testbed-manager]
2026-03-07 00:33:57.673524 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:33:57.673535 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:33:57.673577 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:33:57.673590 | orchestrator |
2026-03-07 00:33:57.673602 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] ***************
2026-03-07 00:33:57.673613 | orchestrator | Saturday 07 March 2026 00:33:51 +0000 (0:00:01.241) 0:08:14.686 ********
2026-03-07 00:33:57.673624 | orchestrator | changed: [testbed-node-3]
2026-03-07 00:33:57.673635 | orchestrator | changed: [testbed-node-4]
2026-03-07 00:33:57.673645 | orchestrator | changed: [testbed-node-5]
2026-03-07 00:33:57.673656 | orchestrator | changed: [testbed-manager]
2026-03-07 00:33:57.673667 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:33:57.673677 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:33:57.673688 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:33:57.673699 | orchestrator |
2026-03-07 00:33:57.673710 | orchestrator | PLAY [Set state bootstrap] *****************************************************
2026-03-07 00:33:57.673721 | orchestrator |
2026-03-07 00:33:57.673731 | orchestrator | TASK [Set osism.bootstrap.status fact] *****************************************
2026-03-07 00:33:57.673743 | orchestrator | Saturday 07 March 2026 00:33:52 +0000 (0:00:01.122) 0:08:15.808 ********
2026-03-07 00:33:57.673761 | orchestrator | included: osism.commons.state for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-07 00:33:57.673773 | orchestrator |
2026-03-07 00:33:57.673784 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-03-07 00:33:57.673794 | orchestrator | Saturday 07 March 2026 00:33:53 +0000 (0:00:00.996) 0:08:16.805 ********
2026-03-07 00:33:57.673805 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:33:57.673822 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:33:57.673833 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:33:57.673844 | orchestrator | ok: [testbed-manager]
2026-03-07 00:33:57.673855 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:33:57.673865 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:33:57.673876 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:33:57.673887 | orchestrator |
2026-03-07 00:33:57.673898 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-03-07 00:33:57.673909 | orchestrator | Saturday 07 March 2026 00:33:54 +0000 (0:00:00.902) 0:08:17.707 ********
2026-03-07 00:33:57.673919 | orchestrator | changed: [testbed-node-5]
2026-03-07 00:33:57.673930 | orchestrator | changed: [testbed-node-3]
2026-03-07 00:33:57.673941 | orchestrator | changed: [testbed-node-4]
2026-03-07 00:33:57.673952 | orchestrator | changed: [testbed-manager]
2026-03-07 00:33:57.673962 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:33:57.673973 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:33:57.673984 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:33:57.673995 | orchestrator |
2026-03-07 00:33:57.674006 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] **************************************
2026-03-07 00:33:57.674132 | orchestrator | Saturday 07 March 2026 00:33:55 +0000 (0:00:01.241) 0:08:18.948 ********
2026-03-07 00:33:57.674164 | orchestrator | included: osism.commons.state for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-07 00:33:57.674183 | orchestrator |
2026-03-07 00:33:57.674201 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-03-07 00:33:57.674221 | orchestrator | Saturday 07 March 2026 00:33:56 +0000 (0:00:01.054) 0:08:20.003 ********
2026-03-07 00:33:57.674242 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:33:57.674261 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:33:57.674279 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:33:57.674290 | orchestrator | ok: [testbed-manager]
2026-03-07 00:33:57.674301 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:33:57.674312 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:33:57.674323 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:33:57.674333 | orchestrator |
2026-03-07 00:33:57.674345 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-03-07 00:33:57.674368 | orchestrator | Saturday 07 March 2026 00:33:57 +0000 (0:00:00.895) 0:08:20.899 ********
2026-03-07 00:33:59.261117 | orchestrator | changed: [testbed-node-3]
2026-03-07 00:33:59.261232 | orchestrator | changed: [testbed-node-4]
2026-03-07 00:33:59.261248 | orchestrator | changed: [testbed-node-5]
2026-03-07 00:33:59.261259 | orchestrator | changed: [testbed-manager]
2026-03-07 00:33:59.261270 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:33:59.261281 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:33:59.261292 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:33:59.261304 | orchestrator |
2026-03-07 00:33:59.261316 | orchestrator | PLAY RECAP *********************************************************************
2026-03-07 00:33:59.261330 | orchestrator | testbed-manager : ok=168  changed=40  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0
2026-03-07 00:33:59.261343 | orchestrator | testbed-node-0 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-03-07 00:33:59.261354 | orchestrator | testbed-node-1 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-03-07 00:33:59.261393 | orchestrator | testbed-node-2 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-03-07 00:33:59.261405 | orchestrator | testbed-node-3 : ok=175  changed=65  unreachable=0 failed=0 skipped=38  rescued=0 ignored=0
2026-03-07 00:33:59.261416 | orchestrator | testbed-node-4 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-03-07 00:33:59.261427 | orchestrator | testbed-node-5 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-03-07 00:33:59.261438 | orchestrator |
2026-03-07 00:33:59.261449 | orchestrator |
2026-03-07 00:33:59.261460 | orchestrator | TASKS RECAP ********************************************************************
2026-03-07 00:33:59.261471 | orchestrator | Saturday 07 March 2026 00:33:58 +0000 (0:00:01.176) 0:08:22.075 ********
2026-03-07 00:33:59.261481 | orchestrator | ===============================================================================
2026-03-07 00:33:59.261492 | orchestrator | osism.commons.packages : Install required packages --------------------- 83.93s
2026-03-07 00:33:59.261503 | orchestrator | osism.commons.packages : Download required packages -------------------- 39.98s
2026-03-07 00:33:59.261514 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 35.38s
2026-03-07 00:33:59.261524 | orchestrator | osism.commons.repository : Update package cache ------------------------ 14.74s
2026-03-07 00:33:59.261535 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 13.33s
2026-03-07 00:33:59.261593 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 12.97s
2026-03-07 00:33:59.261606 | orchestrator | osism.services.docker : Install docker package ------------------------- 10.32s
2026-03-07 00:33:59.261617 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.41s
2026-03-07 00:33:59.261628 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.32s
2026-03-07 00:33:59.261638 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.84s
2026-03-07 00:33:59.261649 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 8.53s
2026-03-07 00:33:59.261676 | orchestrator | osism.services.docker : Add repository ---------------------------------- 8.10s
2026-03-07 00:33:59.261690 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 7.98s
2026-03-07 00:33:59.261703 | orchestrator | osism.services.rng : Install rng package -------------------------------- 7.98s
2026-03-07 00:33:59.261716 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.91s
2026-03-07 00:33:59.261728 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.66s
2026-03-07 00:33:59.261742 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.49s
2026-03-07 00:33:59.261755 | orchestrator | osism.commons.services : Populate service facts ------------------------- 6.04s
2026-03-07 00:33:59.261767 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 6.00s
2026-03-07 00:33:59.261779 | orchestrator | osism.commons.sysctl : Set sysctl parameters on rabbitmq ---------------- 5.73s
2026-03-07 00:33:59.624398 | orchestrator | + osism apply fail2ban
2026-03-07 00:34:12.609743 | orchestrator | 2026-03-07 00:34:12 | INFO  | Prepare task for execution of fail2ban.
2026-03-07 00:34:12.721830 | orchestrator | 2026-03-07 00:34:12 | INFO  | Task d3f57c35-6e3c-4d7d-be47-51406e80923d (fail2ban) was prepared for execution.
2026-03-07 00:34:12.721918 | orchestrator | 2026-03-07 00:34:12 | INFO  | It takes a moment until task d3f57c35-6e3c-4d7d-be47-51406e80923d (fail2ban) has been started and output is visible here.
2026-03-07 00:34:35.690134 | orchestrator |
2026-03-07 00:34:35.690263 | orchestrator | PLAY [Apply role fail2ban] *****************************************************
2026-03-07 00:34:35.690317 | orchestrator |
2026-03-07 00:34:35.690336 | orchestrator | TASK [osism.services.fail2ban : Include distribution specific install tasks] ***
2026-03-07 00:34:35.690354 | orchestrator | Saturday 07 March 2026 00:34:17 +0000 (0:00:00.280) 0:00:00.280 ********
2026-03-07 00:34:35.690371 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/fail2ban/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-07 00:34:35.690390 | orchestrator |
2026-03-07 00:34:35.690405 | orchestrator | TASK [osism.services.fail2ban : Install fail2ban package] **********************
2026-03-07 00:34:35.690420 | orchestrator | Saturday 07 March 2026 00:34:18 +0000 (0:00:01.231) 0:00:01.511 ********
2026-03-07 00:34:35.690435 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:34:35.690451 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:34:35.690466 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:34:35.690480 | orchestrator | changed: [testbed-node-3]
2026-03-07 00:34:35.690496 | orchestrator | changed: [testbed-node-4]
2026-03-07 00:34:35.690666 | orchestrator | changed: [testbed-node-5]
2026-03-07 00:34:35.690683 | orchestrator | changed: [testbed-manager]
2026-03-07 00:34:35.690699 | orchestrator |
2026-03-07 00:34:35.690714 | orchestrator | TASK [osism.services.fail2ban : Copy configuration files] **********************
2026-03-07 00:34:35.690726 | orchestrator | Saturday 07 March 2026 00:34:30 +0000 (0:00:11.774) 0:00:13.286 ********
2026-03-07 00:34:35.690739 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:34:35.690752 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:34:35.690766 | orchestrator | changed: [testbed-manager]
2026-03-07 00:34:35.690779 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:34:35.690792 | orchestrator | changed: [testbed-node-3]
2026-03-07 00:34:35.690805 | orchestrator | changed: [testbed-node-4]
2026-03-07 00:34:35.690818 | orchestrator | changed: [testbed-node-5]
2026-03-07 00:34:35.690831 | orchestrator |
2026-03-07 00:34:35.690845 | orchestrator | TASK [osism.services.fail2ban : Manage fail2ban service] ***********************
2026-03-07 00:34:35.690858 | orchestrator | Saturday 07 March 2026 00:34:32 +0000 (0:00:01.534) 0:00:14.820 ********
2026-03-07 00:34:35.690870 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:34:35.690882 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:34:35.690893 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:34:35.690903 | orchestrator | ok: [testbed-manager]
2026-03-07 00:34:35.690914 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:34:35.690925 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:34:35.690935 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:34:35.690946 | orchestrator |
2026-03-07 00:34:35.690957 | orchestrator | TASK [osism.services.fail2ban : Reload fail2ban configuration] *****************
2026-03-07 00:34:35.690969 | orchestrator | Saturday 07 March 2026 00:34:33 +0000 (0:00:01.479) 0:00:16.300 ********
2026-03-07 00:34:35.690981 | orchestrator | changed: [testbed-manager]
2026-03-07 00:34:35.690992 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:34:35.691004 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:34:35.691019 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:34:35.691033 | orchestrator | changed: [testbed-node-3]
2026-03-07 00:34:35.691048 | orchestrator | changed: [testbed-node-4]
2026-03-07 00:34:35.691062 | orchestrator | changed: [testbed-node-5]
2026-03-07 00:34:35.691076 | orchestrator |
2026-03-07 00:34:35.691090 | orchestrator | PLAY RECAP *********************************************************************
2026-03-07 00:34:35.691105 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-07 00:34:35.691118 | orchestrator | testbed-node-0 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-07 00:34:35.691133 | orchestrator | testbed-node-1 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-07 00:34:35.691147 | orchestrator | testbed-node-2 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-07 00:34:35.691190 | orchestrator | testbed-node-3 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-07 00:34:35.691205 | orchestrator | testbed-node-4 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-07 00:34:35.691218 | orchestrator | testbed-node-5 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-07 00:34:35.691232 | orchestrator |
2026-03-07 00:34:35.691245 | orchestrator |
2026-03-07 00:34:35.691258 | orchestrator | TASKS RECAP ********************************************************************
2026-03-07 00:34:35.691273 | orchestrator | Saturday 07 March 2026 00:34:35 +0000 (0:00:01.725) 0:00:18.025 ********
2026-03-07 00:34:35.691287 | orchestrator | ===============================================================================
2026-03-07 00:34:35.691301 | orchestrator | osism.services.fail2ban : Install fail2ban package --------------------- 11.77s
2026-03-07 00:34:35.691315 | orchestrator | osism.services.fail2ban : Reload fail2ban configuration ----------------- 1.73s
2026-03-07 00:34:35.691328 | orchestrator | osism.services.fail2ban : Copy configuration files ---------------------- 1.53s
2026-03-07 00:34:35.691341 | orchestrator | osism.services.fail2ban : Manage fail2ban service ----------------------- 1.48s
2026-03-07 00:34:35.691355 | orchestrator | osism.services.fail2ban : Include distribution specific install tasks --- 1.23s
2026-03-07 00:34:36.070985 | orchestrator | + [[ -e /etc/redhat-release ]]
2026-03-07 00:34:36.071080 | orchestrator | + osism apply network
2026-03-07 00:34:48.234372 | orchestrator | 2026-03-07 00:34:48 | INFO  | Prepare task for execution of network.
2026-03-07 00:34:48.344096 | orchestrator | 2026-03-07 00:34:48 | INFO  | Task 88e8eefe-6659-48c5-bb8d-32cac6510771 (network) was prepared for execution.
2026-03-07 00:34:48.344205 | orchestrator | 2026-03-07 00:34:48 | INFO  | It takes a moment until task 88e8eefe-6659-48c5-bb8d-32cac6510771 (network) has been started and output is visible here.
2026-03-07 00:35:18.419581 | orchestrator |
2026-03-07 00:35:18.419693 | orchestrator | PLAY [Apply role network] ******************************************************
2026-03-07 00:35:18.419708 | orchestrator |
2026-03-07 00:35:18.419716 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ******
2026-03-07 00:35:18.419722 | orchestrator | Saturday 07 March 2026 00:34:52 +0000 (0:00:00.271) 0:00:00.271 ********
2026-03-07 00:35:18.419726 | orchestrator | ok: [testbed-manager]
2026-03-07 00:35:18.419733 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:35:18.419740 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:35:18.419747 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:35:18.419754 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:35:18.419761 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:35:18.419768 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:35:18.419775 | orchestrator |
2026-03-07 00:35:18.419782 | orchestrator | TASK [osism.commons.network : Include type specific tasks] *********************
2026-03-07 00:35:18.419789 | orchestrator | Saturday 07 March 2026 00:34:53 +0000 (0:00:00.784) 0:00:01.055 ********
2026-03-07 00:35:18.419798 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-07 00:35:18.419807 | orchestrator |
2026-03-07 00:35:18.419814 | orchestrator | TASK [osism.commons.network : Install required packages] ***********************
2026-03-07 00:35:18.419822 | orchestrator | Saturday 07 March 2026 00:34:54 +0000 (0:00:01.304) 0:00:02.359 ********
2026-03-07 00:35:18.419828 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:35:18.419836 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:35:18.419840 | orchestrator | ok: [testbed-manager]
2026-03-07 00:35:18.419844 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:35:18.419869 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:35:18.419876 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:35:18.419883 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:35:18.419889 | orchestrator |
2026-03-07 00:35:18.419897 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] *************************
2026-03-07 00:35:18.419903 | orchestrator | Saturday 07 March 2026 00:34:57 +0000 (0:00:02.182) 0:00:04.542 ********
2026-03-07 00:35:18.419910 | orchestrator | ok: [testbed-manager]
2026-03-07 00:35:18.419917 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:35:18.419924 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:35:18.419931 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:35:18.419937 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:35:18.419944 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:35:18.419951 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:35:18.419958 | orchestrator |
2026-03-07 00:35:18.419964 | orchestrator | TASK [osism.commons.network : Create required directories] *********************
2026-03-07 00:35:18.419972 | orchestrator | Saturday 07 March 2026 00:34:59 +0000 (0:00:01.943) 0:00:06.486 ********
2026-03-07 00:35:18.419979 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan)
2026-03-07 00:35:18.419986 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan)
2026-03-07 00:35:18.419993 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan)
2026-03-07 00:35:18.420000 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan)
2026-03-07 00:35:18.420007 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan)
2026-03-07 00:35:18.420014 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan)
2026-03-07 00:35:18.420020 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan)
2026-03-07 00:35:18.420027 | orchestrator |
2026-03-07 00:35:18.420034 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] **********
2026-03-07 00:35:18.420041 | orchestrator | Saturday 07 March 2026 00:35:00 +0000 (0:00:01.015) 0:00:07.502 ********
2026-03-07 00:35:18.420047 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-07 00:35:18.420055 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-03-07 00:35:18.420062 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-03-07 00:35:18.420068 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-03-07 00:35:18.420075 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-03-07 00:35:18.420083 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-03-07 00:35:18.420090 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-07 00:35:18.420097 | orchestrator |
2026-03-07 00:35:18.420104 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] **********************
2026-03-07 00:35:18.420111 | orchestrator | Saturday 07 March 2026 00:35:03 +0000 (0:00:03.776) 0:00:11.278 ********
2026-03-07 00:35:18.420118 | orchestrator | changed: [testbed-manager]
2026-03-07 00:35:18.420125 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:35:18.420132 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:35:18.420139 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:35:18.420157 | orchestrator | changed: [testbed-node-3]
2026-03-07 00:35:18.420164 | orchestrator | changed: [testbed-node-5]
2026-03-07 00:35:18.420171 | orchestrator | changed: [testbed-node-4]
2026-03-07 00:35:18.420178 | orchestrator |
2026-03-07 00:35:18.420193 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] ***********
2026-03-07 00:35:18.420200 | orchestrator | Saturday 07 March 2026 00:35:05 +0000 (0:00:01.670) 0:00:12.949 ********
2026-03-07 00:35:18.420206 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-07 00:35:18.420213 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-07 00:35:18.420220 | orchestrator | ok: [testbed-node-1
-> localhost] 2026-03-07 00:35:18.420227 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-07 00:35:18.420250 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-03-07 00:35:18.420257 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-07 00:35:18.420264 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-07 00:35:18.420271 | orchestrator | 2026-03-07 00:35:18.420278 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2026-03-07 00:35:18.420292 | orchestrator | Saturday 07 March 2026 00:35:07 +0000 (0:00:01.883) 0:00:14.832 ******** 2026-03-07 00:35:18.420298 | orchestrator | ok: [testbed-manager] 2026-03-07 00:35:18.420305 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:35:18.420312 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:35:18.420319 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:35:18.420325 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:35:18.420332 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:35:18.420338 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:35:18.420345 | orchestrator | 2026-03-07 00:35:18.420353 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2026-03-07 00:35:18.420376 | orchestrator | Saturday 07 March 2026 00:35:08 +0000 (0:00:01.044) 0:00:15.877 ******** 2026-03-07 00:35:18.420383 | orchestrator | skipping: [testbed-manager] 2026-03-07 00:35:18.420389 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:35:18.420396 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:35:18.420403 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:35:18.420410 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:35:18.420416 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:35:18.420423 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:35:18.420430 | orchestrator | 2026-03-07 00:35:18.420450 | orchestrator | TASK [osism.commons.network : Install package 
networkd-dispatcher] ************* 2026-03-07 00:35:18.420458 | orchestrator | Saturday 07 March 2026 00:35:09 +0000 (0:00:00.586) 0:00:16.464 ******** 2026-03-07 00:35:18.420465 | orchestrator | ok: [testbed-manager] 2026-03-07 00:35:18.420472 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:35:18.420478 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:35:18.420485 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:35:18.420492 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:35:18.420499 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:35:18.420505 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:35:18.420512 | orchestrator | 2026-03-07 00:35:18.420519 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2026-03-07 00:35:18.420526 | orchestrator | Saturday 07 March 2026 00:35:11 +0000 (0:00:02.154) 0:00:18.618 ******** 2026-03-07 00:35:18.420532 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:35:18.420539 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:35:18.420545 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:35:18.420552 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:35:18.420559 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:35:18.420566 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:35:18.420574 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'}) 2026-03-07 00:35:18.420582 | orchestrator | 2026-03-07 00:35:18.420589 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] ************** 2026-03-07 00:35:18.420596 | orchestrator | Saturday 07 March 2026 00:35:12 +0000 (0:00:00.967) 0:00:19.586 ******** 2026-03-07 00:35:18.420602 | orchestrator | ok: [testbed-manager] 2026-03-07 00:35:18.420609 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:35:18.420616 | orchestrator | changed: [testbed-node-0] 2026-03-07 
00:35:18.420623 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:35:18.420630 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:35:18.420637 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:35:18.420644 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:35:18.420651 | orchestrator | 2026-03-07 00:35:18.420658 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2026-03-07 00:35:18.420664 | orchestrator | Saturday 07 March 2026 00:35:13 +0000 (0:00:01.765) 0:00:21.351 ******** 2026-03-07 00:35:18.420672 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-07 00:35:18.420681 | orchestrator | 2026-03-07 00:35:18.420687 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2026-03-07 00:35:18.420700 | orchestrator | Saturday 07 March 2026 00:35:15 +0000 (0:00:01.301) 0:00:22.652 ******** 2026-03-07 00:35:18.420707 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:35:18.420714 | orchestrator | ok: [testbed-manager] 2026-03-07 00:35:18.420721 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:35:18.420728 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:35:18.420734 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:35:18.420741 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:35:18.420748 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:35:18.420754 | orchestrator | 2026-03-07 00:35:18.420760 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2026-03-07 00:35:18.420767 | orchestrator | Saturday 07 March 2026 00:35:16 +0000 (0:00:00.987) 0:00:23.640 ******** 2026-03-07 00:35:18.420773 | orchestrator | ok: [testbed-manager] 2026-03-07 00:35:18.420784 | orchestrator | ok: [testbed-node-0] 2026-03-07 
00:35:18.420790 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:35:18.420796 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:35:18.420802 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:35:18.420808 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:35:18.420814 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:35:18.420820 | orchestrator | 2026-03-07 00:35:18.420827 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2026-03-07 00:35:18.420834 | orchestrator | Saturday 07 March 2026 00:35:17 +0000 (0:00:00.858) 0:00:24.498 ******** 2026-03-07 00:35:18.420841 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2026-03-07 00:35:18.420849 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2026-03-07 00:35:18.420855 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2026-03-07 00:35:18.420862 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2026-03-07 00:35:18.420869 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-07 00:35:18.420875 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2026-03-07 00:35:18.420882 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-07 00:35:18.420889 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2026-03-07 00:35:18.420896 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-07 00:35:18.420903 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-07 00:35:18.420909 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-07 00:35:18.420916 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2026-03-07 00:35:18.420923 | orchestrator | changed: [testbed-node-4] => 
(item=/etc/netplan/50-cloud-init.yaml) 2026-03-07 00:35:18.420929 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-07 00:35:18.420935 | orchestrator | 2026-03-07 00:35:18.420948 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2026-03-07 00:35:35.384981 | orchestrator | Saturday 07 March 2026 00:35:18 +0000 (0:00:01.295) 0:00:25.794 ******** 2026-03-07 00:35:35.385074 | orchestrator | skipping: [testbed-manager] 2026-03-07 00:35:35.385085 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:35:35.385092 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:35:35.385099 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:35:35.385106 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:35:35.385112 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:35:35.385118 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:35:35.385125 | orchestrator | 2026-03-07 00:35:35.385132 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************ 2026-03-07 00:35:35.385139 | orchestrator | Saturday 07 March 2026 00:35:19 +0000 (0:00:00.709) 0:00:26.503 ******** 2026-03-07 00:35:35.385147 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-node-2, testbed-node-0, testbed-manager, testbed-node-1, testbed-node-3, testbed-node-5, testbed-node-4 2026-03-07 00:35:35.385174 | orchestrator | 2026-03-07 00:35:35.385182 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************ 2026-03-07 00:35:35.385188 | orchestrator | Saturday 07 March 2026 00:35:23 +0000 (0:00:04.693) 0:00:31.196 ******** 2026-03-07 00:35:35.385196 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', 
'192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2026-03-07 00:35:35.385204 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2026-03-07 00:35:35.385210 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2026-03-07 00:35:35.385220 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2026-03-07 00:35:35.385226 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2026-03-07 00:35:35.385232 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2026-03-07 00:35:35.385250 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2026-03-07 00:35:35.385256 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 
'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2026-03-07 00:35:35.385268 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2026-03-07 00:35:35.385275 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2026-03-07 00:35:35.385282 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2026-03-07 00:35:35.385300 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2026-03-07 00:35:35.385307 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2026-03-07 00:35:35.385319 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': 
'192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2026-03-07 00:35:35.385325 | orchestrator | 2026-03-07 00:35:35.385331 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2026-03-07 00:35:35.385338 | orchestrator | Saturday 07 March 2026 00:35:29 +0000 (0:00:05.832) 0:00:37.028 ******** 2026-03-07 00:35:35.385344 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2026-03-07 00:35:35.385351 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2026-03-07 00:35:35.385357 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2026-03-07 00:35:35.385364 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2026-03-07 00:35:35.385370 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2026-03-07 00:35:35.385376 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', 
'192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2026-03-07 00:35:35.385386 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2026-03-07 00:35:35.385393 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2026-03-07 00:35:35.385399 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2026-03-07 00:35:35.385406 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2026-03-07 00:35:35.385438 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2026-03-07 00:35:35.385451 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2026-03-07 00:35:35.385483 | orchestrator | changed: 
[testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2026-03-07 00:35:49.879533 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2026-03-07 00:35:49.879681 | orchestrator | 2026-03-07 00:35:49.879712 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2026-03-07 00:35:49.879735 | orchestrator | Saturday 07 March 2026 00:35:35 +0000 (0:00:06.069) 0:00:43.098 ******** 2026-03-07 00:35:49.879757 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-07 00:35:49.879776 | orchestrator | 2026-03-07 00:35:49.879795 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2026-03-07 00:35:49.879814 | orchestrator | Saturday 07 March 2026 00:35:37 +0000 (0:00:01.519) 0:00:44.618 ******** 2026-03-07 00:35:49.879835 | orchestrator | ok: [testbed-manager] 2026-03-07 00:35:49.879856 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:35:49.879875 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:35:49.879894 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:35:49.879912 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:35:49.879931 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:35:49.879951 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:35:49.879971 | orchestrator | 2026-03-07 00:35:49.879992 | orchestrator | TASK [osism.commons.network : Remove 
unused configuration files] *************** 2026-03-07 00:35:49.880012 | orchestrator | Saturday 07 March 2026 00:35:38 +0000 (0:00:01.204) 0:00:45.823 ******** 2026-03-07 00:35:49.880032 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)  2026-03-07 00:35:49.880054 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)  2026-03-07 00:35:49.880074 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-03-07 00:35:49.880094 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-03-07 00:35:49.880115 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)  2026-03-07 00:35:49.880136 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)  2026-03-07 00:35:49.880157 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-03-07 00:35:49.880178 | orchestrator | skipping: [testbed-manager] 2026-03-07 00:35:49.880200 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-03-07 00:35:49.880220 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)  2026-03-07 00:35:49.880239 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)  2026-03-07 00:35:49.880259 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-03-07 00:35:49.880278 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-03-07 00:35:49.880299 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:35:49.880342 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)  2026-03-07 00:35:49.880365 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)  
2026-03-07 00:35:49.880385 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-03-07 00:35:49.880471 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-03-07 00:35:49.880491 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:35:49.880509 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)  2026-03-07 00:35:49.880529 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)  2026-03-07 00:35:49.880547 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-03-07 00:35:49.880565 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-03-07 00:35:49.880576 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:35:49.880587 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)  2026-03-07 00:35:49.880598 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)  2026-03-07 00:35:49.880609 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-03-07 00:35:49.880620 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-03-07 00:35:49.880630 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:35:49.880641 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:35:49.880652 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)  2026-03-07 00:35:49.880663 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)  2026-03-07 00:35:49.880674 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-03-07 00:35:49.880684 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-03-07 00:35:49.880695 | 
orchestrator | skipping: [testbed-node-5] 2026-03-07 00:35:49.880706 | orchestrator | 2026-03-07 00:35:49.880717 | orchestrator | TASK [osism.commons.network : Include network extra init] ********************** 2026-03-07 00:35:49.880759 | orchestrator | Saturday 07 March 2026 00:35:39 +0000 (0:00:01.028) 0:00:46.852 ******** 2026-03-07 00:35:49.880779 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/network-extra-init.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-07 00:35:49.880797 | orchestrator | 2026-03-07 00:35:49.880817 | orchestrator | TASK [osism.commons.network : Deploy network-extra-init script] **************** 2026-03-07 00:35:49.880836 | orchestrator | Saturday 07 March 2026 00:35:40 +0000 (0:00:01.349) 0:00:48.201 ******** 2026-03-07 00:35:49.880854 | orchestrator | skipping: [testbed-manager] 2026-03-07 00:35:49.880871 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:35:49.880883 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:35:49.880893 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:35:49.880904 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:35:49.880915 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:35:49.880926 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:35:49.880936 | orchestrator | 2026-03-07 00:35:49.880947 | orchestrator | TASK [osism.commons.network : Deploy network-extra-init systemd service] ******* 2026-03-07 00:35:49.880957 | orchestrator | Saturday 07 March 2026 00:35:41 +0000 (0:00:00.713) 0:00:48.915 ******** 2026-03-07 00:35:49.880967 | orchestrator | skipping: [testbed-manager] 2026-03-07 00:35:49.880976 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:35:49.880986 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:35:49.880995 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:35:49.881005 | 
orchestrator | skipping: [testbed-node-3] 2026-03-07 00:35:49.881014 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:35:49.881024 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:35:49.881033 | orchestrator | 2026-03-07 00:35:49.881043 | orchestrator | TASK [osism.commons.network : Enable and start network-extra-init service] ***** 2026-03-07 00:35:49.881052 | orchestrator | Saturday 07 March 2026 00:35:42 +0000 (0:00:00.922) 0:00:49.837 ******** 2026-03-07 00:35:49.881062 | orchestrator | skipping: [testbed-manager] 2026-03-07 00:35:49.881081 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:35:49.881091 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:35:49.881100 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:35:49.881110 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:35:49.881119 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:35:49.881129 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:35:49.881138 | orchestrator | 2026-03-07 00:35:49.881148 | orchestrator | TASK [osism.commons.network : Disable and stop network-extra-init service] ***** 2026-03-07 00:35:49.881158 | orchestrator | Saturday 07 March 2026 00:35:43 +0000 (0:00:00.737) 0:00:50.575 ******** 2026-03-07 00:35:49.881167 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:35:49.881177 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:35:49.881187 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:35:49.881196 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:35:49.881206 | orchestrator | ok: [testbed-manager] 2026-03-07 00:35:49.881215 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:35:49.881225 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:35:49.881235 | orchestrator | 2026-03-07 00:35:49.881244 | orchestrator | TASK [osism.commons.network : Remove network-extra-init systemd service] ******* 2026-03-07 00:35:49.881254 | orchestrator | Saturday 07 March 2026 00:35:45 +0000 (0:00:01.854) 0:00:52.429 ******** 
2026-03-07 00:35:49.881264 | orchestrator | ok: [testbed-manager]
2026-03-07 00:35:49.881273 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:35:49.881283 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:35:49.881292 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:35:49.881301 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:35:49.881311 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:35:49.881320 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:35:49.881330 | orchestrator |
2026-03-07 00:35:49.881339 | orchestrator | TASK [osism.commons.network : Remove network-extra-init script] ****************
2026-03-07 00:35:49.881357 | orchestrator | Saturday 07 March 2026 00:35:46 +0000 (0:00:00.984) 0:00:53.414 ********
2026-03-07 00:35:49.881367 | orchestrator | ok: [testbed-manager]
2026-03-07 00:35:49.881376 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:35:49.881386 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:35:49.881416 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:35:49.881426 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:35:49.881436 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:35:49.881446 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:35:49.881463 | orchestrator |
2026-03-07 00:35:49.881479 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] **************
2026-03-07 00:35:49.881494 | orchestrator | Saturday 07 March 2026 00:35:48 +0000 (0:00:02.383) 0:00:55.798 ********
2026-03-07 00:35:49.881510 | orchestrator | skipping: [testbed-manager]
2026-03-07 00:35:49.881526 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:35:49.881542 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:35:49.881559 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:35:49.881575 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:35:49.881592 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:35:49.881608 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:35:49.881626 | orchestrator |
2026-03-07 00:35:49.881642 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ********
2026-03-07 00:35:49.881659 | orchestrator | Saturday 07 March 2026 00:35:49 +0000 (0:00:00.872) 0:00:56.670 ********
2026-03-07 00:35:49.881676 | orchestrator | skipping: [testbed-manager]
2026-03-07 00:35:49.881692 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:35:49.881709 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:35:49.881725 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:35:49.881742 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:35:49.881759 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:35:49.881776 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:35:49.881793 | orchestrator |
2026-03-07 00:35:49.881810 | orchestrator | PLAY RECAP *********************************************************************
2026-03-07 00:35:49.881821 | orchestrator | testbed-manager : ok=25  changed=5  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2026-03-07 00:35:49.881842 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-03-07 00:35:49.881862 | orchestrator | testbed-node-1 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-03-07 00:35:50.252631 | orchestrator | testbed-node-2 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-03-07 00:35:50.252725 | orchestrator | testbed-node-3 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-03-07 00:35:50.252735 | orchestrator | testbed-node-4 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-03-07 00:35:50.252745 | orchestrator | testbed-node-5 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-03-07 00:35:50.252753 | orchestrator |
2026-03-07 00:35:50.252763 | orchestrator |
2026-03-07 00:35:50.252772 | orchestrator | TASKS RECAP ********************************************************************
2026-03-07 00:35:50.252782 | orchestrator | Saturday 07 March 2026 00:35:49 +0000 (0:00:00.584) 0:00:57.255 ********
2026-03-07 00:35:50.252791 | orchestrator | ===============================================================================
2026-03-07 00:35:50.252800 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 6.07s
2026-03-07 00:35:50.252808 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 5.83s
2026-03-07 00:35:50.252817 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.69s
2026-03-07 00:35:50.252825 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.78s
2026-03-07 00:35:50.252834 | orchestrator | osism.commons.network : Remove network-extra-init script ---------------- 2.38s
2026-03-07 00:35:50.252842 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.18s
2026-03-07 00:35:50.252851 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.15s
2026-03-07 00:35:50.252859 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.94s
2026-03-07 00:35:50.252868 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.88s
2026-03-07 00:35:50.252876 | orchestrator | osism.commons.network : Disable and stop network-extra-init service ----- 1.85s
2026-03-07 00:35:50.252885 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.77s
2026-03-07 00:35:50.252893 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.67s
2026-03-07 00:35:50.252902 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.52s
2026-03-07 00:35:50.252910 | orchestrator | osism.commons.network : Include network extra init ---------------------- 1.35s
2026-03-07 00:35:50.252919 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.30s
2026-03-07 00:35:50.252927 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.30s
2026-03-07 00:35:50.252936 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.30s
2026-03-07 00:35:50.252944 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.20s
2026-03-07 00:35:50.252953 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.04s
2026-03-07 00:35:50.252962 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.03s
2026-03-07 00:35:50.599388 | orchestrator | + osism apply wireguard
2026-03-07 00:36:02.798672 | orchestrator | 2026-03-07 00:36:02 | INFO  | Prepare task for execution of wireguard.
2026-03-07 00:36:02.869099 | orchestrator | 2026-03-07 00:36:02 | INFO  | Task 9b1bda04-1995-4218-9541-b0e9ab149035 (wireguard) was prepared for execution.
2026-03-07 00:36:02.869231 | orchestrator | 2026-03-07 00:36:02 | INFO  | It takes a moment until task 9b1bda04-1995-4218-9541-b0e9ab149035 (wireguard) has been started and output is visible here.
2026-03-07 00:36:24.121309 | orchestrator |
2026-03-07 00:36:24.121457 | orchestrator | PLAY [Apply role wireguard] ****************************************************
2026-03-07 00:36:24.121479 | orchestrator |
2026-03-07 00:36:24.121498 | orchestrator | TASK [osism.services.wireguard : Install iptables package] *********************
2026-03-07 00:36:24.121511 | orchestrator | Saturday 07 March 2026 00:36:07 +0000 (0:00:00.228) 0:00:00.228 ********
2026-03-07 00:36:24.121526 | orchestrator | ok: [testbed-manager]
2026-03-07 00:36:24.121546 | orchestrator |
2026-03-07 00:36:24.121557 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ********************
2026-03-07 00:36:24.121569 | orchestrator | Saturday 07 March 2026 00:36:08 +0000 (0:00:01.628) 0:00:01.857 ********
2026-03-07 00:36:24.121579 | orchestrator | changed: [testbed-manager]
2026-03-07 00:36:24.121591 | orchestrator |
2026-03-07 00:36:24.121602 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] *******
2026-03-07 00:36:24.121613 | orchestrator | Saturday 07 March 2026 00:36:15 +0000 (0:00:06.964) 0:00:08.822 ********
2026-03-07 00:36:24.121624 | orchestrator | changed: [testbed-manager]
2026-03-07 00:36:24.121635 | orchestrator |
2026-03-07 00:36:24.121646 | orchestrator | TASK [osism.services.wireguard : Create preshared key] *************************
2026-03-07 00:36:24.121656 | orchestrator | Saturday 07 March 2026 00:36:16 +0000 (0:00:00.586) 0:00:09.409 ********
2026-03-07 00:36:24.121667 | orchestrator | changed: [testbed-manager]
2026-03-07 00:36:24.121678 | orchestrator |
2026-03-07 00:36:24.121688 | orchestrator | TASK [osism.services.wireguard : Get preshared key] ****************************
2026-03-07 00:36:24.121699 | orchestrator | Saturday 07 March 2026 00:36:17 +0000 (0:00:00.512) 0:00:09.921 ********
2026-03-07 00:36:24.121713 | orchestrator | ok: [testbed-manager]
2026-03-07 00:36:24.121732 | orchestrator |
2026-03-07 00:36:24.121744 | orchestrator | TASK [osism.services.wireguard : Get public key - server] **********************
2026-03-07 00:36:24.121755 | orchestrator | Saturday 07 March 2026 00:36:17 +0000 (0:00:00.737) 0:00:10.659 ********
2026-03-07 00:36:24.121765 | orchestrator | ok: [testbed-manager]
2026-03-07 00:36:24.121776 | orchestrator |
2026-03-07 00:36:24.121787 | orchestrator | TASK [osism.services.wireguard : Get private key - server] *********************
2026-03-07 00:36:24.121797 | orchestrator | Saturday 07 March 2026 00:36:18 +0000 (0:00:00.500) 0:00:11.159 ********
2026-03-07 00:36:24.121808 | orchestrator | ok: [testbed-manager]
2026-03-07 00:36:24.121819 | orchestrator |
2026-03-07 00:36:24.121830 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] *************
2026-03-07 00:36:24.121841 | orchestrator | Saturday 07 March 2026 00:36:18 +0000 (0:00:00.458) 0:00:11.618 ********
2026-03-07 00:36:24.121856 | orchestrator | changed: [testbed-manager]
2026-03-07 00:36:24.121876 | orchestrator |
2026-03-07 00:36:24.121892 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] **************
2026-03-07 00:36:24.121905 | orchestrator | Saturday 07 March 2026 00:36:19 +0000 (0:00:01.234) 0:00:12.853 ********
2026-03-07 00:36:24.121918 | orchestrator | changed: [testbed-manager] => (item=None)
2026-03-07 00:36:24.121931 | orchestrator | changed: [testbed-manager]
2026-03-07 00:36:24.121944 | orchestrator |
2026-03-07 00:36:24.121956 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] **********
2026-03-07 00:36:24.121969 | orchestrator | Saturday 07 March 2026 00:36:20 +0000 (0:00:01.003) 0:00:13.856 ********
2026-03-07 00:36:24.121988 | orchestrator | changed: [testbed-manager]
2026-03-07 00:36:24.122001 | orchestrator |
2026-03-07 00:36:24.122014 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] ***************
2026-03-07 00:36:24.122083 | orchestrator | Saturday 07 March 2026 00:36:22 +0000 (0:00:01.728) 0:00:15.585 ********
2026-03-07 00:36:24.122096 | orchestrator | changed: [testbed-manager]
2026-03-07 00:36:24.122108 | orchestrator |
2026-03-07 00:36:24.122121 | orchestrator | PLAY RECAP *********************************************************************
2026-03-07 00:36:24.122178 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-07 00:36:24.122194 | orchestrator |
2026-03-07 00:36:24.122207 | orchestrator |
2026-03-07 00:36:24.122218 | orchestrator | TASKS RECAP ********************************************************************
2026-03-07 00:36:24.122229 | orchestrator | Saturday 07 March 2026 00:36:23 +0000 (0:00:01.076) 0:00:16.661 ********
2026-03-07 00:36:24.122240 | orchestrator | ===============================================================================
2026-03-07 00:36:24.122250 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 6.96s
2026-03-07 00:36:24.122267 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.73s
2026-03-07 00:36:24.122285 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.63s
2026-03-07 00:36:24.122297 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.23s
2026-03-07 00:36:24.122307 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 1.08s
2026-03-07 00:36:24.122318 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 1.00s
2026-03-07 00:36:24.122328 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.74s
2026-03-07 00:36:24.122339 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.59s
2026-03-07 00:36:24.122425 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.51s
2026-03-07 00:36:24.122442 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.50s
2026-03-07 00:36:24.122453 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.46s
2026-03-07 00:36:24.470827 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh
2026-03-07 00:36:24.511001 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current
2026-03-07 00:36:24.511132 | orchestrator | Dload Upload Total Spent Left Speed
2026-03-07 00:36:24.591972 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 15 100 15 0 0 184 0 --:--:-- --:--:-- --:--:-- 185
2026-03-07 00:36:24.601838 | orchestrator | + osism apply --environment custom workarounds
2026-03-07 00:36:26.703814 | orchestrator | 2026-03-07 00:36:26 | INFO  | Trying to run play workarounds in environment custom
2026-03-07 00:36:36.729046 | orchestrator | 2026-03-07 00:36:36 | INFO  | Prepare task for execution of workarounds.
2026-03-07 00:36:36.812728 | orchestrator | 2026-03-07 00:36:36 | INFO  | Task beb635bb-c520-49ad-a519-2ad7d0b641b1 (workarounds) was prepared for execution.
2026-03-07 00:36:36.812827 | orchestrator | 2026-03-07 00:36:36 | INFO  | It takes a moment until task beb635bb-c520-49ad-a519-2ad7d0b641b1 (workarounds) has been started and output is visible here.
2026-03-07 00:37:03.834442 | orchestrator |
2026-03-07 00:37:03.834571 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-07 00:37:03.834589 | orchestrator |
2026-03-07 00:37:03.834601 | orchestrator | TASK [Group hosts based on virtualization_role] ********************************
2026-03-07 00:37:03.834613 | orchestrator | Saturday 07 March 2026 00:36:41 +0000 (0:00:00.134) 0:00:00.134 ********
2026-03-07 00:37:03.834625 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest)
2026-03-07 00:37:03.834637 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest)
2026-03-07 00:37:03.834648 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest)
2026-03-07 00:37:03.834659 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest)
2026-03-07 00:37:03.834670 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest)
2026-03-07 00:37:03.834681 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest)
2026-03-07 00:37:03.834697 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest)
2026-03-07 00:37:03.834751 | orchestrator |
2026-03-07 00:37:03.834780 | orchestrator | PLAY [Apply netplan configuration on the manager node] *************************
2026-03-07 00:37:03.834798 | orchestrator |
2026-03-07 00:37:03.834817 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-03-07 00:37:03.834836 | orchestrator | Saturday 07 March 2026 00:36:42 +0000 (0:00:00.867) 0:00:01.001 ********
2026-03-07 00:37:03.834856 | orchestrator | ok: [testbed-manager]
2026-03-07 00:37:03.834875 | orchestrator |
2026-03-07 00:37:03.834895 | orchestrator | PLAY [Apply netplan configuration on all other nodes] **************************
2026-03-07 00:37:03.834913 | orchestrator |
2026-03-07 00:37:03.834932 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-03-07 00:37:03.834953 | orchestrator | Saturday 07 March 2026 00:36:44 +0000 (0:00:02.695) 0:00:03.697 ********
2026-03-07 00:37:03.834973 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:37:03.834994 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:37:03.835014 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:37:03.835035 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:37:03.835054 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:37:03.835074 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:37:03.835094 | orchestrator |
2026-03-07 00:37:03.835115 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] *************************
2026-03-07 00:37:03.835136 | orchestrator |
2026-03-07 00:37:03.835157 | orchestrator | TASK [Copy custom CA certificates] *********************************************
2026-03-07 00:37:03.835177 | orchestrator | Saturday 07 March 2026 00:36:46 +0000 (0:00:01.704) 0:00:05.402 ********
2026-03-07 00:37:03.835197 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-07 00:37:03.835220 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-07 00:37:03.835240 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-07 00:37:03.835259 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-07 00:37:03.835279 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-07 00:37:03.835335 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-07 00:37:03.835354 | orchestrator |
2026-03-07 00:37:03.835371 | orchestrator | TASK [Run update-ca-certificates] **********************************************
2026-03-07 00:37:03.835391 | orchestrator | Saturday 07 March 2026 00:36:48 +0000 (0:00:01.529) 0:00:06.931 ********
2026-03-07 00:37:03.835409 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:37:03.835428 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:37:03.835445 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:37:03.835465 | orchestrator | changed: [testbed-node-4]
2026-03-07 00:37:03.835483 | orchestrator | changed: [testbed-node-3]
2026-03-07 00:37:03.835502 | orchestrator | changed: [testbed-node-5]
2026-03-07 00:37:03.835521 | orchestrator |
2026-03-07 00:37:03.835541 | orchestrator | TASK [Run update-ca-trust] *****************************************************
2026-03-07 00:37:03.835559 | orchestrator | Saturday 07 March 2026 00:36:51 +0000 (0:00:03.720) 0:00:10.652 ********
2026-03-07 00:37:03.835577 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:37:03.835615 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:37:03.835635 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:37:03.835653 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:37:03.835670 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:37:03.835690 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:37:03.835708 | orchestrator |
2026-03-07 00:37:03.835727 | orchestrator | PLAY [Add a workaround service] ************************************************
2026-03-07 00:37:03.835746 | orchestrator |
2026-03-07 00:37:03.835765 | orchestrator | TASK [Copy workarounds.sh scripts] *********************************************
2026-03-07 00:37:03.835802 | orchestrator | Saturday 07 March 2026 00:36:52 +0000 (0:00:00.739) 0:00:11.392 ********
2026-03-07 00:37:03.835821 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:37:03.835839 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:37:03.835858 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:37:03.835877 | orchestrator | changed: [testbed-node-3]
2026-03-07 00:37:03.835894 | orchestrator | changed: [testbed-node-4]
2026-03-07 00:37:03.835914 | orchestrator | changed: [testbed-node-5]
2026-03-07 00:37:03.835933 | orchestrator | changed: [testbed-manager]
2026-03-07 00:37:03.835952 | orchestrator |
2026-03-07 00:37:03.835972 | orchestrator | TASK [Copy workarounds systemd unit file] **************************************
2026-03-07 00:37:03.835992 | orchestrator | Saturday 07 March 2026 00:36:54 +0000 (0:00:01.678) 0:00:13.071 ********
2026-03-07 00:37:03.836011 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:37:03.836028 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:37:03.836046 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:37:03.836064 | orchestrator | changed: [testbed-node-3]
2026-03-07 00:37:03.836082 | orchestrator | changed: [testbed-node-4]
2026-03-07 00:37:03.836102 | orchestrator | changed: [testbed-node-5]
2026-03-07 00:37:03.836145 | orchestrator | changed: [testbed-manager]
2026-03-07 00:37:03.836158 | orchestrator |
2026-03-07 00:37:03.836169 | orchestrator | TASK [Reload systemd daemon] ***************************************************
2026-03-07 00:37:03.836180 | orchestrator | Saturday 07 March 2026 00:36:55 +0000 (0:00:01.633) 0:00:14.767 ********
2026-03-07 00:37:03.836191 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:37:03.836202 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:37:03.836213 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:37:03.836224 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:37:03.836234 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:37:03.836245 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:37:03.836255 | orchestrator | ok: [testbed-manager]
2026-03-07 00:37:03.836266 | orchestrator |
2026-03-07 00:37:03.836277 | orchestrator | TASK [Enable workarounds.service (Debian)] *************************************
2026-03-07 00:37:03.836317 | orchestrator | Saturday 07 March 2026 00:36:57 +0000 (0:00:01.633) 0:00:16.400 ********
2026-03-07 00:37:03.836328 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:37:03.836339 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:37:03.836350 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:37:03.836361 | orchestrator | changed: [testbed-node-3]
2026-03-07 00:37:03.836372 | orchestrator | changed: [testbed-node-4]
2026-03-07 00:37:03.836382 | orchestrator | changed: [testbed-node-5]
2026-03-07 00:37:03.836393 | orchestrator | changed: [testbed-manager]
2026-03-07 00:37:03.836404 | orchestrator |
2026-03-07 00:37:03.836415 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] ***************************
2026-03-07 00:37:03.836426 | orchestrator | Saturday 07 March 2026 00:37:00 +0000 (0:00:02.850) 0:00:19.251 ********
2026-03-07 00:37:03.836436 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:37:03.836447 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:37:03.836458 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:37:03.836469 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:37:03.836479 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:37:03.836490 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:37:03.836500 | orchestrator | skipping: [testbed-manager]
2026-03-07 00:37:03.836511 | orchestrator |
2026-03-07 00:37:03.836522 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ******************
2026-03-07 00:37:03.836533 | orchestrator |
2026-03-07 00:37:03.836544 | orchestrator | TASK [Install python3-docker] **************************************************
2026-03-07 00:37:03.836555 | orchestrator | Saturday 07 March 2026 00:37:00 +0000 (0:00:00.658) 0:00:19.909 ********
2026-03-07 00:37:03.836566 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:37:03.836577 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:37:03.836587 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:37:03.836598 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:37:03.836609 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:37:03.836630 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:37:03.836641 | orchestrator | ok: [testbed-manager]
2026-03-07 00:37:03.836651 | orchestrator |
2026-03-07 00:37:03.836662 | orchestrator | PLAY RECAP *********************************************************************
2026-03-07 00:37:03.836675 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-07 00:37:03.836688 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-07 00:37:03.836699 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-07 00:37:03.836710 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-07 00:37:03.836721 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-07 00:37:03.836732 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-07 00:37:03.836743 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-07 00:37:03.836754 | orchestrator |
2026-03-07 00:37:03.836765 | orchestrator |
2026-03-07 00:37:03.836784 | orchestrator | TASKS RECAP ********************************************************************
2026-03-07 00:37:03.836795 | orchestrator | Saturday 07 March 2026 00:37:03 +0000 (0:00:02.812) 0:00:22.722 ********
2026-03-07 00:37:03.836806 | orchestrator | ===============================================================================
2026-03-07 00:37:03.836817 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.72s
2026-03-07 00:37:03.836828 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 2.85s
2026-03-07 00:37:03.836838 | orchestrator | Install python3-docker -------------------------------------------------- 2.81s
2026-03-07 00:37:03.836849 | orchestrator | Apply netplan configuration --------------------------------------------- 2.70s
2026-03-07 00:37:03.836860 | orchestrator | Apply netplan configuration --------------------------------------------- 1.70s
2026-03-07 00:37:03.836871 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.70s
2026-03-07 00:37:03.836882 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.68s
2026-03-07 00:37:03.836892 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.63s
2026-03-07 00:37:03.836903 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.53s
2026-03-07 00:37:03.836914 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.87s
2026-03-07 00:37:03.836925 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.74s
2026-03-07 00:37:03.836943 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.66s
2026-03-07 00:37:04.455136 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes
2026-03-07 00:37:16.503817 | orchestrator | 2026-03-07 00:37:16 | INFO  | Prepare task for execution of reboot.
2026-03-07 00:37:16.594892 | orchestrator | 2026-03-07 00:37:16 | INFO  | Task 9249f5e8-cf59-4e01-b0d9-475537c9bfc7 (reboot) was prepared for execution.
2026-03-07 00:37:16.594996 | orchestrator | 2026-03-07 00:37:16 | INFO  | It takes a moment until task 9249f5e8-cf59-4e01-b0d9-475537c9bfc7 (reboot) has been started and output is visible here.
2026-03-07 00:37:27.297377 | orchestrator | 2026-03-07 00:37:27.297483 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-03-07 00:37:27.297496 | orchestrator | 2026-03-07 00:37:27.297505 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-03-07 00:37:27.297541 | orchestrator | Saturday 07 March 2026 00:37:21 +0000 (0:00:00.242) 0:00:00.242 ******** 2026-03-07 00:37:27.297556 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:37:27.297571 | orchestrator | 2026-03-07 00:37:27.297584 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-03-07 00:37:27.297598 | orchestrator | Saturday 07 March 2026 00:37:21 +0000 (0:00:00.118) 0:00:00.361 ******** 2026-03-07 00:37:27.297612 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:37:27.297625 | orchestrator | 2026-03-07 00:37:27.297639 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-03-07 00:37:27.297654 | orchestrator | Saturday 07 March 2026 00:37:22 +0000 (0:00:01.025) 0:00:01.387 ******** 2026-03-07 00:37:27.297667 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:37:27.297680 | orchestrator | 2026-03-07 00:37:27.297694 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-03-07 00:37:27.297709 | orchestrator | 2026-03-07 00:37:27.297724 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-03-07 00:37:27.297738 | orchestrator | Saturday 07 March 2026 00:37:22 +0000 (0:00:00.111) 0:00:01.499 ******** 2026-03-07 00:37:27.297750 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:37:27.297757 | orchestrator | 2026-03-07 00:37:27.297766 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-03-07 00:37:27.297778 | orchestrator | Saturday 07 March 
2026 00:37:22 +0000 (0:00:00.118) 0:00:01.617 ******** 2026-03-07 00:37:27.297792 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:37:27.297805 | orchestrator | 2026-03-07 00:37:27.297819 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-03-07 00:37:27.297833 | orchestrator | Saturday 07 March 2026 00:37:23 +0000 (0:00:00.666) 0:00:02.284 ******** 2026-03-07 00:37:27.297848 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:37:27.297860 | orchestrator | 2026-03-07 00:37:27.297874 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-03-07 00:37:27.297884 | orchestrator | 2026-03-07 00:37:27.297893 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-03-07 00:37:27.297902 | orchestrator | Saturday 07 March 2026 00:37:23 +0000 (0:00:00.131) 0:00:02.416 ******** 2026-03-07 00:37:27.297911 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:37:27.297920 | orchestrator | 2026-03-07 00:37:27.297930 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-03-07 00:37:27.297939 | orchestrator | Saturday 07 March 2026 00:37:23 +0000 (0:00:00.247) 0:00:02.663 ******** 2026-03-07 00:37:27.297947 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:37:27.297956 | orchestrator | 2026-03-07 00:37:27.297965 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-03-07 00:37:27.297974 | orchestrator | Saturday 07 March 2026 00:37:24 +0000 (0:00:00.627) 0:00:03.291 ******** 2026-03-07 00:37:27.297983 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:37:27.297991 | orchestrator | 2026-03-07 00:37:27.298000 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-03-07 00:37:27.298009 | orchestrator | 2026-03-07 00:37:27.298075 | orchestrator | TASK [Exit playbook, 
if user did not mean to reboot systems] ******************* 2026-03-07 00:37:27.298085 | orchestrator | Saturday 07 March 2026 00:37:24 +0000 (0:00:00.134) 0:00:03.426 ******** 2026-03-07 00:37:27.298094 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:37:27.298103 | orchestrator | 2026-03-07 00:37:27.298112 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-03-07 00:37:27.298134 | orchestrator | Saturday 07 March 2026 00:37:24 +0000 (0:00:00.115) 0:00:03.541 ******** 2026-03-07 00:37:27.298143 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:37:27.298152 | orchestrator | 2026-03-07 00:37:27.298161 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-03-07 00:37:27.298171 | orchestrator | Saturday 07 March 2026 00:37:25 +0000 (0:00:00.643) 0:00:04.185 ******** 2026-03-07 00:37:27.298180 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:37:27.298197 | orchestrator | 2026-03-07 00:37:27.298207 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-03-07 00:37:27.298217 | orchestrator | 2026-03-07 00:37:27.298227 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-03-07 00:37:27.298235 | orchestrator | Saturday 07 March 2026 00:37:25 +0000 (0:00:00.128) 0:00:04.314 ******** 2026-03-07 00:37:27.298268 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:37:27.298281 | orchestrator | 2026-03-07 00:37:27.298290 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-03-07 00:37:27.298298 | orchestrator | Saturday 07 March 2026 00:37:25 +0000 (0:00:00.134) 0:00:04.449 ******** 2026-03-07 00:37:27.298306 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:37:27.298314 | orchestrator | 2026-03-07 00:37:27.298321 | orchestrator | TASK [Reboot system - wait for the reboot to complete] 
************************* 2026-03-07 00:37:27.298329 | orchestrator | Saturday 07 March 2026 00:37:25 +0000 (0:00:00.621) 0:00:05.070 ******** 2026-03-07 00:37:27.298337 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:37:27.298345 | orchestrator | 2026-03-07 00:37:27.298353 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-03-07 00:37:27.298361 | orchestrator | 2026-03-07 00:37:27.298369 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-03-07 00:37:27.298377 | orchestrator | Saturday 07 March 2026 00:37:26 +0000 (0:00:00.129) 0:00:05.199 ******** 2026-03-07 00:37:27.298385 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:37:27.298392 | orchestrator | 2026-03-07 00:37:27.298400 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-03-07 00:37:27.298408 | orchestrator | Saturday 07 March 2026 00:37:26 +0000 (0:00:00.129) 0:00:05.328 ******** 2026-03-07 00:37:27.298416 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:37:27.298424 | orchestrator | 2026-03-07 00:37:27.298432 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-03-07 00:37:27.298440 | orchestrator | Saturday 07 March 2026 00:37:26 +0000 (0:00:00.672) 0:00:06.001 ******** 2026-03-07 00:37:27.298465 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:37:27.298474 | orchestrator | 2026-03-07 00:37:27.298482 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-07 00:37:27.298491 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-07 00:37:27.298514 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-07 00:37:27.298531 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  
rescued=0 ignored=0 2026-03-07 00:37:27.298539 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-07 00:37:27.298547 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-07 00:37:27.298555 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-07 00:37:27.298563 | orchestrator | 2026-03-07 00:37:27.298571 | orchestrator | 2026-03-07 00:37:27.298579 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-07 00:37:27.298587 | orchestrator | Saturday 07 March 2026 00:37:26 +0000 (0:00:00.038) 0:00:06.039 ******** 2026-03-07 00:37:27.298595 | orchestrator | =============================================================================== 2026-03-07 00:37:27.298603 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.26s 2026-03-07 00:37:27.298611 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.86s 2026-03-07 00:37:27.298625 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.68s 2026-03-07 00:37:27.692921 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2026-03-07 00:37:39.901415 | orchestrator | 2026-03-07 00:37:39 | INFO  | Prepare task for execution of wait-for-connection. 2026-03-07 00:37:39.986525 | orchestrator | 2026-03-07 00:37:39 | INFO  | Task 2c808cd2-17fe-442c-b8df-467801e05268 (wait-for-connection) was prepared for execution. 2026-03-07 00:37:39.986638 | orchestrator | 2026-03-07 00:37:39 | INFO  | It takes a moment until task 2c808cd2-17fe-442c-b8df-467801e05268 (wait-for-connection) has been started and output is visible here. 
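The reboot play above deliberately splits "do not wait for the reboot to complete" from the later `wait-for-connection` run. The generic shape of that second half — poll a probe until it succeeds or attempts run out — can be sketched as a small shell helper (the helper name is hypothetical, not from the OSISM scripts; the testbed itself uses Ansible's `wait_for_connection` for this step):

```shell
# Hypothetical poll helper mirroring the reboot-then-wait pattern above:
# keep probing until the command succeeds, giving up after max_attempts.
wait_until_ok() {
    local max_attempts=$1; shift
    local attempt_num=1
    until "$@"; do
        if (( attempt_num++ == max_attempts )); then
            return 1
        fi
        sleep 1
    done
}
```

Splitting reboot and wait into two plays lets all nodes reboot in parallel instead of serializing a full reboot-and-reconnect cycle per host.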
2026-03-07 00:37:56.556089 | orchestrator | 2026-03-07 00:37:56.556318 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2026-03-07 00:37:56.556382 | orchestrator | 2026-03-07 00:37:56.556402 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2026-03-07 00:37:56.556420 | orchestrator | Saturday 07 March 2026 00:37:44 +0000 (0:00:00.327) 0:00:00.327 ******** 2026-03-07 00:37:56.556439 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:37:56.556459 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:37:56.556477 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:37:56.556496 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:37:56.556515 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:37:56.556556 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:37:56.556577 | orchestrator | 2026-03-07 00:37:56.556596 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-07 00:37:56.556619 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-07 00:37:56.556641 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-07 00:37:56.556661 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-07 00:37:56.556680 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-07 00:37:56.556699 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-07 00:37:56.556718 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-07 00:37:56.556737 | orchestrator | 2026-03-07 00:37:56.556758 | orchestrator | 2026-03-07 00:37:56.556778 | orchestrator | TASKS RECAP 
******************************************************************** 2026-03-07 00:37:56.556797 | orchestrator | Saturday 07 March 2026 00:37:56 +0000 (0:00:11.478) 0:00:11.805 ******** 2026-03-07 00:37:56.556817 | orchestrator | =============================================================================== 2026-03-07 00:37:56.556836 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.48s 2026-03-07 00:37:56.910976 | orchestrator | + osism apply hddtemp 2026-03-07 00:38:09.144290 | orchestrator | 2026-03-07 00:38:09 | INFO  | Prepare task for execution of hddtemp. 2026-03-07 00:38:09.233714 | orchestrator | 2026-03-07 00:38:09 | INFO  | Task 804a6c89-4f94-4bd0-bcf5-e7dcda64d0f2 (hddtemp) was prepared for execution. 2026-03-07 00:38:09.233829 | orchestrator | 2026-03-07 00:38:09 | INFO  | It takes a moment until task 804a6c89-4f94-4bd0-bcf5-e7dcda64d0f2 (hddtemp) has been started and output is visible here. 2026-03-07 00:38:38.061417 | orchestrator | 2026-03-07 00:38:38.061517 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2026-03-07 00:38:38.061529 | orchestrator | 2026-03-07 00:38:38.061536 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2026-03-07 00:38:38.061543 | orchestrator | Saturday 07 March 2026 00:38:13 +0000 (0:00:00.287) 0:00:00.287 ******** 2026-03-07 00:38:38.061569 | orchestrator | ok: [testbed-manager] 2026-03-07 00:38:38.061577 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:38:38.061584 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:38:38.061590 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:38:38.061597 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:38:38.061603 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:38:38.061610 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:38:38.061616 | orchestrator | 2026-03-07 00:38:38.061622 | orchestrator | TASK [osism.services.hddtemp : Include 
distribution specific install tasks] **** 2026-03-07 00:38:38.061629 | orchestrator | Saturday 07 March 2026 00:38:14 +0000 (0:00:00.804) 0:00:01.092 ******** 2026-03-07 00:38:38.061637 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-07 00:38:38.061645 | orchestrator | 2026-03-07 00:38:38.061652 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2026-03-07 00:38:38.061658 | orchestrator | Saturday 07 March 2026 00:38:16 +0000 (0:00:01.309) 0:00:02.401 ******** 2026-03-07 00:38:38.061664 | orchestrator | ok: [testbed-manager] 2026-03-07 00:38:38.061671 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:38:38.061677 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:38:38.061683 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:38:38.061689 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:38:38.061695 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:38:38.061701 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:38:38.061707 | orchestrator | 2026-03-07 00:38:38.061713 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2026-03-07 00:38:38.061720 | orchestrator | Saturday 07 March 2026 00:38:17 +0000 (0:00:01.747) 0:00:04.149 ******** 2026-03-07 00:38:38.061726 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:38:38.061734 | orchestrator | changed: [testbed-manager] 2026-03-07 00:38:38.061740 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:38:38.061746 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:38:38.061752 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:38:38.061758 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:38:38.061764 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:38:38.061770 | 
orchestrator | 2026-03-07 00:38:38.061776 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] ********* 2026-03-07 00:38:38.061783 | orchestrator | Saturday 07 March 2026 00:38:18 +0000 (0:00:01.152) 0:00:05.302 ******** 2026-03-07 00:38:38.061789 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:38:38.061795 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:38:38.061801 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:38:38.061807 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:38:38.061813 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:38:38.061819 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:38:38.061825 | orchestrator | ok: [testbed-manager] 2026-03-07 00:38:38.061832 | orchestrator | 2026-03-07 00:38:38.061838 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2026-03-07 00:38:38.061844 | orchestrator | Saturday 07 March 2026 00:38:20 +0000 (0:00:01.236) 0:00:06.539 ******** 2026-03-07 00:38:38.061850 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:38:38.061857 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:38:38.061863 | orchestrator | changed: [testbed-manager] 2026-03-07 00:38:38.061879 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:38:38.061886 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:38:38.061892 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:38:38.061898 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:38:38.061904 | orchestrator | 2026-03-07 00:38:38.061911 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2026-03-07 00:38:38.061917 | orchestrator | Saturday 07 March 2026 00:38:21 +0000 (0:00:00.917) 0:00:07.456 ******** 2026-03-07 00:38:38.061923 | orchestrator | changed: [testbed-manager] 2026-03-07 00:38:38.061934 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:38:38.061941 | orchestrator | changed: [testbed-node-2] 
2026-03-07 00:38:38.061947 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:38:38.061953 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:38:38.061959 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:38:38.061965 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:38:38.061971 | orchestrator | 2026-03-07 00:38:38.061978 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2026-03-07 00:38:38.061984 | orchestrator | Saturday 07 March 2026 00:38:32 +0000 (0:00:11.863) 0:00:19.320 ******** 2026-03-07 00:38:38.061990 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-07 00:38:38.061997 | orchestrator | 2026-03-07 00:38:38.062003 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2026-03-07 00:38:38.062009 | orchestrator | Saturday 07 March 2026 00:38:34 +0000 (0:00:01.660) 0:00:20.981 ******** 2026-03-07 00:38:38.062062 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:38:38.062070 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:38:38.062088 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:38:38.062102 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:38:38.062109 | orchestrator | changed: [testbed-manager] 2026-03-07 00:38:38.062115 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:38:38.062121 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:38:38.062127 | orchestrator | 2026-03-07 00:38:38.062133 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-07 00:38:38.062203 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-07 00:38:38.062228 | orchestrator | testbed-node-0 : ok=8  
changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-07 00:38:38.062235 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-07 00:38:38.062241 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-07 00:38:38.062248 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-07 00:38:38.062254 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-07 00:38:38.062260 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-07 00:38:38.062266 | orchestrator | 2026-03-07 00:38:38.062273 | orchestrator | 2026-03-07 00:38:38.062279 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-07 00:38:38.062285 | orchestrator | Saturday 07 March 2026 00:38:37 +0000 (0:00:03.073) 0:00:24.054 ******** 2026-03-07 00:38:38.062291 | orchestrator | =============================================================================== 2026-03-07 00:38:38.062298 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 11.86s 2026-03-07 00:38:38.062304 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 3.07s 2026-03-07 00:38:38.062310 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 1.75s 2026-03-07 00:38:38.062316 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.66s 2026-03-07 00:38:38.062322 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.31s 2026-03-07 00:38:38.062329 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.24s 2026-03-07 00:38:38.062341 | orchestrator | osism.services.hddtemp : Enable 
Kernel Module drivetemp ----------------- 1.15s 2026-03-07 00:38:38.062347 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.92s 2026-03-07 00:38:38.062353 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.80s 2026-03-07 00:38:38.502595 | orchestrator | ++ semver latest 7.1.1 2026-03-07 00:38:38.565574 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-07 00:38:38.565679 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-03-07 00:38:38.565690 | orchestrator | + sudo systemctl restart manager.service 2026-03-07 00:38:52.017590 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-03-07 00:38:52.017687 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-03-07 00:38:52.017696 | orchestrator | + local max_attempts=60 2026-03-07 00:38:52.017704 | orchestrator | + local name=ceph-ansible 2026-03-07 00:38:52.017710 | orchestrator | + local attempt_num=1 2026-03-07 00:38:52.017717 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-07 00:38:52.063335 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-07 00:38:52.063422 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-07 00:38:52.063430 | orchestrator | + sleep 5 2026-03-07 00:38:57.068644 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-07 00:38:57.406149 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-07 00:38:57.406290 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-07 00:38:57.406307 | orchestrator | + sleep 5 2026-03-07 00:39:02.408659 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-07 00:39:02.438557 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-07 00:39:02.438638 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-07 00:39:02.438644 | orchestrator | + sleep 5 2026-03-07 00:39:07.441087 | 
orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-07 00:39:07.480629 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-07 00:39:07.480732 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-07 00:39:07.480746 | orchestrator | + sleep 5 2026-03-07 00:39:12.485379 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-07 00:39:12.524202 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-07 00:39:12.524306 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-07 00:39:12.524326 | orchestrator | + sleep 5 2026-03-07 00:39:17.528977 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-07 00:39:17.568506 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-07 00:39:17.568616 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-07 00:39:17.568635 | orchestrator | + sleep 5 2026-03-07 00:39:22.573402 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-07 00:39:22.615806 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-07 00:39:22.615914 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-07 00:39:22.615930 | orchestrator | + sleep 5 2026-03-07 00:39:27.620560 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-07 00:39:27.664435 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-07 00:39:27.664538 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-07 00:39:27.664553 | orchestrator | + sleep 5 2026-03-07 00:39:32.668597 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-07 00:39:32.705489 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-07 00:39:32.705605 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-07 00:39:32.705621 | orchestrator | + sleep 5 2026-03-07 00:39:37.710125 | orchestrator | ++ 
/usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-07 00:39:37.751932 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-07 00:39:37.752075 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-07 00:39:37.752105 | orchestrator | + sleep 5 2026-03-07 00:39:42.756969 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-07 00:39:42.799562 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-07 00:39:42.799669 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-07 00:39:42.799685 | orchestrator | + sleep 5 2026-03-07 00:39:47.805353 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-07 00:39:47.848082 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-07 00:39:47.848185 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-07 00:39:47.848228 | orchestrator | + sleep 5 2026-03-07 00:39:52.853793 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-07 00:39:52.891752 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-07 00:39:52.891854 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-07 00:39:52.891870 | orchestrator | + sleep 5 2026-03-07 00:39:57.897242 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-07 00:39:57.941279 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-07 00:39:57.941383 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-03-07 00:39:57.941397 | orchestrator | + local max_attempts=60 2026-03-07 00:39:57.941408 | orchestrator | + local name=kolla-ansible 2026-03-07 00:39:57.941418 | orchestrator | + local attempt_num=1 2026-03-07 00:39:57.942407 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-03-07 00:39:57.982916 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-07 00:39:57.983018 | orchestrator | + 
wait_for_container_healthy 60 osism-ansible 2026-03-07 00:39:57.983033 | orchestrator | + local max_attempts=60 2026-03-07 00:39:57.983045 | orchestrator | + local name=osism-ansible 2026-03-07 00:39:57.983056 | orchestrator | + local attempt_num=1 2026-03-07 00:39:57.983938 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-03-07 00:39:58.021111 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-07 00:39:58.021200 | orchestrator | + [[ true == \t\r\u\e ]] 2026-03-07 00:39:58.021211 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-03-07 00:39:58.202887 | orchestrator | ARA in ceph-ansible already disabled. 2026-03-07 00:39:58.369725 | orchestrator | ARA in kolla-ansible already disabled. 2026-03-07 00:39:58.527773 | orchestrator | ARA in osism-ansible already disabled. 2026-03-07 00:39:58.695506 | orchestrator | ARA in osism-kubernetes already disabled. 2026-03-07 00:39:58.695690 | orchestrator | + osism apply gather-facts 2026-03-07 00:40:11.085984 | orchestrator | 2026-03-07 00:40:11 | INFO  | Prepare task for execution of gather-facts. 2026-03-07 00:40:11.157163 | orchestrator | 2026-03-07 00:40:11 | INFO  | Task 05c1c4c1-78a7-48d4-9d56-fdae33ecf498 (gather-facts) was prepared for execution. 2026-03-07 00:40:11.157250 | orchestrator | 2026-03-07 00:40:11 | INFO  | It takes a moment until task 05c1c4c1-78a7-48d4-9d56-fdae33ecf498 (gather-facts) has been started and output is visible here. 
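From the xtrace above, `wait_for_container_healthy` polls the container's Docker health status every 5 seconds. A plausible reconstruction follows; the real script may differ, and the docker path and sleep interval are made overridable here purely so the loop can be exercised without a Docker daemon (the trace shows a hard-coded `/usr/bin/docker` and `sleep 5`):

```shell
# Plausible reconstruction of wait_for_container_healthy from the xtrace
# above. DOCKER and HEALTH_CHECK_INTERVAL are parameterized for testing --
# an assumption, not part of the original script.
wait_for_container_healthy() {
    local max_attempts=$1
    local name=$2
    local attempt_num=1
    local status
    while true; do
        status=$("${DOCKER:-/usr/bin/docker}" inspect -f '{{.State.Health.Status}}' "$name")
        # Possible states include "starting", "unhealthy", and "healthy"
        [[ $status == healthy ]] && return 0
        if (( attempt_num++ == max_attempts )); then
            return 1
        fi
        sleep "${HEALTH_CHECK_INTERVAL:-5}"
    done
}
```

In the trace the ceph-ansible container moves through unhealthy, then starting, then healthy in roughly a minute after the manager restart; 60 attempts at 5-second intervals gives the services about five minutes to settle before the job would fail.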
2026-03-07 00:40:24.771443 | orchestrator | 2026-03-07 00:40:24.771547 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-03-07 00:40:24.771560 | orchestrator | 2026-03-07 00:40:24.771568 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-03-07 00:40:24.771576 | orchestrator | Saturday 07 March 2026 00:40:15 +0000 (0:00:00.221) 0:00:00.221 ******** 2026-03-07 00:40:24.771584 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:40:24.771592 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:40:24.771600 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:40:24.771607 | orchestrator | ok: [testbed-manager] 2026-03-07 00:40:24.771614 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:40:24.771621 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:40:24.771628 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:40:24.771636 | orchestrator | 2026-03-07 00:40:24.771643 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-03-07 00:40:24.771650 | orchestrator | 2026-03-07 00:40:24.771658 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-03-07 00:40:24.771665 | orchestrator | Saturday 07 March 2026 00:40:23 +0000 (0:00:08.373) 0:00:08.594 ******** 2026-03-07 00:40:24.771672 | orchestrator | skipping: [testbed-manager] 2026-03-07 00:40:24.771681 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:40:24.771688 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:40:24.771695 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:40:24.771702 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:40:24.771709 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:40:24.771717 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:40:24.771724 | orchestrator | 2026-03-07 00:40:24.771731 | orchestrator | PLAY RECAP 
********************************************************************* 2026-03-07 00:40:24.771797 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-07 00:40:24.771808 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-07 00:40:24.771816 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-07 00:40:24.771839 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-07 00:40:24.771847 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-07 00:40:24.771854 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-07 00:40:24.771861 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-07 00:40:24.771869 | orchestrator | 2026-03-07 00:40:24.771876 | orchestrator | 2026-03-07 00:40:24.771883 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-07 00:40:24.771891 | orchestrator | Saturday 07 March 2026 00:40:24 +0000 (0:00:00.588) 0:00:09.183 ******** 2026-03-07 00:40:24.771898 | orchestrator | =============================================================================== 2026-03-07 00:40:24.771906 | orchestrator | Gathers facts about hosts ----------------------------------------------- 8.37s 2026-03-07 00:40:24.771913 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.59s 2026-03-07 00:40:25.106302 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2026-03-07 00:40:25.118455 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2026-03-07 
00:40:25.139367 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2026-03-07 00:40:25.151635 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2026-03-07 00:40:25.162448 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2026-03-07 00:40:25.175537 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/320-openstack-minimal.sh /usr/local/bin/deploy-openstack-minimal 2026-03-07 00:40:25.188088 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2026-03-07 00:40:25.200750 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2026-03-07 00:40:25.213299 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2026-03-07 00:40:25.228291 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade-manager.sh /usr/local/bin/upgrade-manager 2026-03-07 00:40:25.241315 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2026-03-07 00:40:25.253634 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2026-03-07 00:40:25.266607 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2026-03-07 00:40:25.280892 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2026-03-07 00:40:25.293137 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/320-openstack-minimal.sh /usr/local/bin/upgrade-openstack-minimal 2026-03-07 00:40:25.305938 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2026-03-07 00:40:25.318527 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2026-03-07 00:40:25.332037 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2026-03-07 00:40:25.342917 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2026-03-07 00:40:25.352655 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2026-03-07 00:40:25.364611 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2026-03-07 00:40:25.378218 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2026-03-07 00:40:25.397995 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2026-03-07 00:40:25.415260 | orchestrator | + [[ false == \t\r\u\e ]] 2026-03-07 00:40:25.771967 | orchestrator | ok: Runtime: 0:25:04.050332 2026-03-07 00:40:25.879597 | 2026-03-07 00:40:25.879729 | TASK [Deploy services] 2026-03-07 00:40:26.412160 | orchestrator | skipping: Conditional result was False 2026-03-07 00:40:26.434202 | 2026-03-07 00:40:26.434577 | TASK [Deploy in a nutshell] 2026-03-07 00:40:27.151910 | orchestrator | + set -e 2026-03-07 00:40:27.152075 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-07 00:40:27.152092 | orchestrator | ++ export INTERACTIVE=false 2026-03-07 00:40:27.152106 | orchestrator | ++ INTERACTIVE=false 2026-03-07 00:40:27.152114 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-07 00:40:27.152122 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-07 00:40:27.152131 | 
orchestrator | + source /opt/manager-vars.sh 2026-03-07 00:40:27.152161 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-07 00:40:27.152181 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-07 00:40:27.152190 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-07 00:40:27.152199 | orchestrator | ++ CEPH_VERSION=reef 2026-03-07 00:40:27.152207 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-07 00:40:27.152219 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-07 00:40:27.152226 | orchestrator | ++ export MANAGER_VERSION=latest 2026-03-07 00:40:27.152239 | orchestrator | ++ MANAGER_VERSION=latest 2026-03-07 00:40:27.152246 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-07 00:40:27.152256 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-07 00:40:27.152262 | orchestrator | ++ export ARA=false 2026-03-07 00:40:27.152269 | orchestrator | ++ ARA=false 2026-03-07 00:40:27.152277 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-07 00:40:27.152284 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-07 00:40:27.152291 | orchestrator | ++ export TEMPEST=true 2026-03-07 00:40:27.152298 | orchestrator | ++ TEMPEST=true 2026-03-07 00:40:27.152304 | orchestrator | ++ export IS_ZUUL=true 2026-03-07 00:40:27.152311 | orchestrator | ++ IS_ZUUL=true 2026-03-07 00:40:27.152318 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.154 2026-03-07 00:40:27.152325 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.154 2026-03-07 00:40:27.152331 | orchestrator | ++ export EXTERNAL_API=false 2026-03-07 00:40:27.152338 | orchestrator | ++ EXTERNAL_API=false 2026-03-07 00:40:27.152344 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-07 00:40:27.152351 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-07 00:40:27.152358 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-07 00:40:27.152376 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-07 00:40:27.152383 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-07 00:40:27.152390 
| orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-07 00:40:27.152397 | orchestrator | + echo 2026-03-07 00:40:27.152404 | orchestrator | 2026-03-07 00:40:27.152861 | orchestrator | # PULL IMAGES 2026-03-07 00:40:27.152884 | orchestrator | 2026-03-07 00:40:27.152892 | orchestrator | + echo '# PULL IMAGES' 2026-03-07 00:40:27.152899 | orchestrator | + echo 2026-03-07 00:40:27.153612 | orchestrator | ++ semver latest 7.0.0 2026-03-07 00:40:27.217754 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-07 00:40:27.217919 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-03-07 00:40:27.217977 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images 2026-03-07 00:40:29.268652 | orchestrator | 2026-03-07 00:40:29 | INFO  | Trying to run play pull-images in environment custom 2026-03-07 00:40:39.369730 | orchestrator | 2026-03-07 00:40:39 | INFO  | Prepare task for execution of pull-images. 2026-03-07 00:40:39.444654 | orchestrator | 2026-03-07 00:40:39 | INFO  | Task 73a4d562-d424-4a0c-b427-4306c9526113 (pull-images) was prepared for execution. 2026-03-07 00:40:39.444773 | orchestrator | 2026-03-07 00:40:39 | INFO  | Task 73a4d562-d424-4a0c-b427-4306c9526113 is running in background. No more output. Check ARA for logs. 2026-03-07 00:40:41.987159 | orchestrator | 2026-03-07 00:40:41 | INFO  | Trying to run play wipe-partitions in environment custom 2026-03-07 00:40:51.999655 | orchestrator | 2026-03-07 00:40:51 | INFO  | Prepare task for execution of wipe-partitions. 2026-03-07 00:40:52.090648 | orchestrator | 2026-03-07 00:40:52 | INFO  | Task ceeeaea7-670f-4198-a8a5-3c5f986051b1 (wipe-partitions) was prepared for execution. 2026-03-07 00:40:52.090744 | orchestrator | 2026-03-07 00:40:52 | INFO  | It takes a moment until task ceeeaea7-670f-4198-a8a5-3c5f986051b1 (wipe-partitions) has been started and output is visible here. 
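The trace shows `semver latest 7.0.0` printing `-1`, the `[[ -1 -ge 0 ]]` test failing, and the literal `latest` string match then carrying the branch. That gate can be sketched as a small wrapper (the function name is hypothetical; it assumes `semver` prints -1/0/1 as a three-way comparator, which is what the trace suggests):

```shell
# Hypothetical wrapper around the version gate seen in the trace: take the
# branch when the manager version is >= the minimum, or when it is the
# moving "latest" tag, which a numeric semver compare would otherwise reject.
version_gate() {
    local version=$1 minimum=$2
    [[ $(semver "$version" "$minimum") -ge 0 ]] || [[ $version == latest ]]
}
```

The explicit `latest` fallback matters because the tag sorts below any concrete release in a semver comparison, yet it refers to the newest code and should get the new-version behavior.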
2026-03-07 00:41:05.346268 | orchestrator | 2026-03-07 00:41:05.346397 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2026-03-07 00:41:05.346415 | orchestrator | 2026-03-07 00:41:05.346428 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2026-03-07 00:41:05.346446 | orchestrator | Saturday 07 March 2026 00:40:56 +0000 (0:00:00.154) 0:00:00.154 ******** 2026-03-07 00:41:05.346487 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:41:05.346501 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:41:05.346512 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:41:05.346523 | orchestrator | 2026-03-07 00:41:05.346534 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2026-03-07 00:41:05.346546 | orchestrator | Saturday 07 March 2026 00:40:57 +0000 (0:00:00.606) 0:00:00.761 ******** 2026-03-07 00:41:05.346562 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:41:05.346573 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:41:05.346585 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:41:05.346596 | orchestrator | 2026-03-07 00:41:05.346607 | orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2026-03-07 00:41:05.346618 | orchestrator | Saturday 07 March 2026 00:40:57 +0000 (0:00:00.402) 0:00:01.164 ******** 2026-03-07 00:41:05.346630 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:41:05.346642 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:41:05.346652 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:41:05.346663 | orchestrator | 2026-03-07 00:41:05.346674 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2026-03-07 00:41:05.346686 | orchestrator | Saturday 07 March 2026 00:40:58 +0000 (0:00:00.584) 0:00:01.748 ******** 2026-03-07 00:41:05.346696 | orchestrator | skipping: 
[testbed-node-3] 2026-03-07 00:41:05.346707 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:41:05.346719 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:41:05.346729 | orchestrator | 2026-03-07 00:41:05.346743 | orchestrator | TASK [Check device availability] *********************************************** 2026-03-07 00:41:05.346756 | orchestrator | Saturday 07 March 2026 00:40:58 +0000 (0:00:00.252) 0:00:02.001 ******** 2026-03-07 00:41:05.346769 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2026-03-07 00:41:05.346786 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2026-03-07 00:41:05.346800 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2026-03-07 00:41:05.346813 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2026-03-07 00:41:05.346826 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2026-03-07 00:41:05.346839 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2026-03-07 00:41:05.346852 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2026-03-07 00:41:05.346864 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2026-03-07 00:41:05.346877 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2026-03-07 00:41:05.346890 | orchestrator | 2026-03-07 00:41:05.346903 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2026-03-07 00:41:05.346915 | orchestrator | Saturday 07 March 2026 00:40:59 +0000 (0:00:01.263) 0:00:03.264 ******** 2026-03-07 00:41:05.346928 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2026-03-07 00:41:05.346941 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2026-03-07 00:41:05.346954 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2026-03-07 00:41:05.346966 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2026-03-07 00:41:05.346979 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2026-03-07 00:41:05.346991 | orchestrator | ok: 
[testbed-node-5] => (item=/dev/sdc) 2026-03-07 00:41:05.347004 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2026-03-07 00:41:05.347017 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2026-03-07 00:41:05.347030 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2026-03-07 00:41:05.347043 | orchestrator | 2026-03-07 00:41:05.347062 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2026-03-07 00:41:05.347075 | orchestrator | Saturday 07 March 2026 00:41:01 +0000 (0:00:01.619) 0:00:04.883 ******** 2026-03-07 00:41:05.347088 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2026-03-07 00:41:05.347100 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2026-03-07 00:41:05.347113 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2026-03-07 00:41:05.347123 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2026-03-07 00:41:05.347172 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2026-03-07 00:41:05.347184 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2026-03-07 00:41:05.347194 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2026-03-07 00:41:05.347205 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2026-03-07 00:41:05.347216 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2026-03-07 00:41:05.347227 | orchestrator | 2026-03-07 00:41:05.347243 | orchestrator | TASK [Reload udev rules] ******************************************************* 2026-03-07 00:41:05.347262 | orchestrator | Saturday 07 March 2026 00:41:03 +0000 (0:00:02.109) 0:00:06.992 ******** 2026-03-07 00:41:05.347280 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:41:05.347299 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:41:05.347319 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:41:05.347338 | orchestrator | 2026-03-07 00:41:05.347357 | orchestrator | TASK [Request device events from the 
kernel] *********************************** 2026-03-07 00:41:05.347374 | orchestrator | Saturday 07 March 2026 00:41:04 +0000 (0:00:00.600) 0:00:07.593 ******** 2026-03-07 00:41:05.347385 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:41:05.347396 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:41:05.347407 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:41:05.347418 | orchestrator | 2026-03-07 00:41:05.347429 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-07 00:41:05.347442 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-07 00:41:05.347454 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-07 00:41:05.347484 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-07 00:41:05.347496 | orchestrator | 2026-03-07 00:41:05.347507 | orchestrator | 2026-03-07 00:41:05.347518 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-07 00:41:05.347529 | orchestrator | Saturday 07 March 2026 00:41:04 +0000 (0:00:00.659) 0:00:08.253 ******** 2026-03-07 00:41:05.347540 | orchestrator | =============================================================================== 2026-03-07 00:41:05.347550 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.11s 2026-03-07 00:41:05.347561 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.62s 2026-03-07 00:41:05.347572 | orchestrator | Check device availability ----------------------------------------------- 1.26s 2026-03-07 00:41:05.347583 | orchestrator | Request device events from the kernel ----------------------------------- 0.66s 2026-03-07 00:41:05.347593 | orchestrator | Find all logical devices owned by UID 167 
------------------------------- 0.61s 2026-03-07 00:41:05.347604 | orchestrator | Reload udev rules ------------------------------------------------------- 0.60s 2026-03-07 00:41:05.347615 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.58s 2026-03-07 00:41:05.347625 | orchestrator | Remove all rook related logical devices --------------------------------- 0.40s 2026-03-07 00:41:05.347636 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.25s 2026-03-07 00:41:17.872994 | orchestrator | 2026-03-07 00:41:17 | INFO  | Prepare task for execution of facts. 2026-03-07 00:41:17.958112 | orchestrator | 2026-03-07 00:41:17 | INFO  | Task f38b539a-da7c-4161-9c8b-4b5542c15dd7 (facts) was prepared for execution. 2026-03-07 00:41:17.958227 | orchestrator | 2026-03-07 00:41:17 | INFO  | It takes a moment until task f38b539a-da7c-4161-9c8b-4b5542c15dd7 (facts) has been started and output is visible here. 2026-03-07 00:41:30.589189 | orchestrator | 2026-03-07 00:41:30.589275 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-03-07 00:41:30.589282 | orchestrator | 2026-03-07 00:41:30.589306 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-03-07 00:41:30.589313 | orchestrator | Saturday 07 March 2026 00:41:22 +0000 (0:00:00.296) 0:00:00.296 ******** 2026-03-07 00:41:30.589320 | orchestrator | ok: [testbed-manager] 2026-03-07 00:41:30.589328 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:41:30.589334 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:41:30.589384 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:41:30.589391 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:41:30.589397 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:41:30.589404 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:41:30.589409 | orchestrator | 2026-03-07 00:41:30.589416 | orchestrator | TASK 
[osism.commons.facts : Copy fact files] *********************************** 2026-03-07 00:41:30.589423 | orchestrator | Saturday 07 March 2026 00:41:23 +0000 (0:00:01.121) 0:00:01.418 ******** 2026-03-07 00:41:30.589429 | orchestrator | skipping: [testbed-manager] 2026-03-07 00:41:30.589436 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:41:30.589442 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:41:30.589449 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:41:30.589456 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:41:30.589460 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:41:30.589464 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:41:30.589467 | orchestrator | 2026-03-07 00:41:30.589471 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-03-07 00:41:30.589488 | orchestrator | 2026-03-07 00:41:30.589492 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-03-07 00:41:30.589496 | orchestrator | Saturday 07 March 2026 00:41:24 +0000 (0:00:01.450) 0:00:02.868 ******** 2026-03-07 00:41:30.589500 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:41:30.589504 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:41:30.589508 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:41:30.589512 | orchestrator | ok: [testbed-manager] 2026-03-07 00:41:30.589515 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:41:30.589519 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:41:30.589523 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:41:30.589526 | orchestrator | 2026-03-07 00:41:30.589530 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-03-07 00:41:30.589534 | orchestrator | 2026-03-07 00:41:30.589538 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-03-07 00:41:30.589542 | orchestrator | Saturday 07 
March 2026 00:41:29 +0000 (0:00:04.731) 0:00:07.600 ******** 2026-03-07 00:41:30.589545 | orchestrator | skipping: [testbed-manager] 2026-03-07 00:41:30.589549 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:41:30.589553 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:41:30.589556 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:41:30.589560 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:41:30.589564 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:41:30.589567 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:41:30.589571 | orchestrator | 2026-03-07 00:41:30.589575 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-07 00:41:30.589579 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-07 00:41:30.589584 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-07 00:41:30.589588 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-07 00:41:30.589592 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-07 00:41:30.589596 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-07 00:41:30.589605 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-07 00:41:30.589609 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-07 00:41:30.589612 | orchestrator | 2026-03-07 00:41:30.589616 | orchestrator | 2026-03-07 00:41:30.589620 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-07 00:41:30.589624 | orchestrator | Saturday 07 March 2026 00:41:30 +0000 (0:00:00.559) 0:00:08.159 ******** 2026-03-07 
00:41:30.589627 | orchestrator | =============================================================================== 2026-03-07 00:41:30.589631 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.73s 2026-03-07 00:41:30.589635 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.45s 2026-03-07 00:41:30.589638 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.12s 2026-03-07 00:41:30.589642 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.56s 2026-03-07 00:41:33.057206 | orchestrator | 2026-03-07 00:41:33 | INFO  | Prepare task for execution of ceph-configure-lvm-volumes. 2026-03-07 00:41:33.112721 | orchestrator | 2026-03-07 00:41:33 | INFO  | Task 980cec63-72bf-4083-b63f-11ed7fe5c1b8 (ceph-configure-lvm-volumes) was prepared for execution. 2026-03-07 00:41:33.112807 | orchestrator | 2026-03-07 00:41:33 | INFO  | It takes a moment until task 980cec63-72bf-4083-b63f-11ed7fe5c1b8 (ceph-configure-lvm-volumes) has been started and output is visible here. 
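The `ceph-configure-lvm-volumes` play that starts below assigns each OSD data device a stable `osd_lvm_uuid` (visible later as values like `e9f941f3-03bb-56ef-...`), so re-running the configuration produces the same VG/LV names. A sketch of one way to get such stable identifiers, assuming a name-based UUID over host plus device; the namespace and naming scheme here are illustrative, not necessarily what the play uses:

```python
import uuid

# Assumption: a deterministic uuid5 over "host:device" gives idempotent
# per-device identifiers. NAMESPACE_DNS is a placeholder namespace.
NAMESPACE = uuid.NAMESPACE_DNS


def osd_lvm_uuid(host: str, device: str) -> str:
    """Return a stable UUID string for an OSD device on a given host."""
    return str(uuid.uuid5(NAMESPACE, f"{host}:{device}"))


# Mirrors the structure the play builds for testbed-node-3 (sdb and sdc
# are the OSD data devices; sdd stays unused in this log).
volumes = {
    dev: {"osd_lvm_uuid": osd_lvm_uuid("testbed-node-3", dev)}
    for dev in ("sdb", "sdc")
}
```

Because the UUID is derived rather than random, wiping and reconfiguring a node reproduces the same `lvm_volumes` structure, which is what makes the later "Compile lvm_volumes" step safe to repeat.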
2026-03-07 00:41:45.459830 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-03-07 00:41:45.459930 | orchestrator | 2.16.14 2026-03-07 00:41:45.459942 | orchestrator | 2026-03-07 00:41:45.459950 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2026-03-07 00:41:45.459957 | orchestrator | 2026-03-07 00:41:45.459964 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-07 00:41:45.459971 | orchestrator | Saturday 07 March 2026 00:41:37 +0000 (0:00:00.327) 0:00:00.327 ******** 2026-03-07 00:41:45.459978 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-07 00:41:45.459985 | orchestrator | 2026-03-07 00:41:45.459991 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-07 00:41:45.459997 | orchestrator | Saturday 07 March 2026 00:41:37 +0000 (0:00:00.260) 0:00:00.587 ******** 2026-03-07 00:41:45.460004 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:41:45.460011 | orchestrator | 2026-03-07 00:41:45.460017 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-07 00:41:45.460023 | orchestrator | Saturday 07 March 2026 00:41:38 +0000 (0:00:00.238) 0:00:00.825 ******** 2026-03-07 00:41:45.460038 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2026-03-07 00:41:45.460044 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2026-03-07 00:41:45.460050 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2026-03-07 00:41:45.460057 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2026-03-07 00:41:45.460064 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2026-03-07 
00:41:45.460070 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2026-03-07 00:41:45.460076 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2026-03-07 00:41:45.460082 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2026-03-07 00:41:45.460089 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2026-03-07 00:41:45.460095 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2026-03-07 00:41:45.460116 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2026-03-07 00:41:45.460122 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2026-03-07 00:41:45.460128 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2026-03-07 00:41:45.460134 | orchestrator | 2026-03-07 00:41:45.460140 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-07 00:41:45.460146 | orchestrator | Saturday 07 March 2026 00:41:38 +0000 (0:00:00.519) 0:00:01.345 ******** 2026-03-07 00:41:45.460152 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:41:45.460158 | orchestrator | 2026-03-07 00:41:45.460164 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-07 00:41:45.460170 | orchestrator | Saturday 07 March 2026 00:41:38 +0000 (0:00:00.214) 0:00:01.559 ******** 2026-03-07 00:41:45.460176 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:41:45.460183 | orchestrator | 2026-03-07 00:41:45.460189 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-07 00:41:45.460198 | orchestrator | Saturday 07 March 2026 00:41:39 +0000 (0:00:00.216) 0:00:01.776 ******** 2026-03-07 
00:41:45.460204 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:41:45.460210 | orchestrator | 2026-03-07 00:41:45.460216 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-07 00:41:45.460222 | orchestrator | Saturday 07 March 2026 00:41:39 +0000 (0:00:00.198) 0:00:01.974 ******** 2026-03-07 00:41:45.460229 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:41:45.460235 | orchestrator | 2026-03-07 00:41:45.460241 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-07 00:41:45.460247 | orchestrator | Saturday 07 March 2026 00:41:39 +0000 (0:00:00.197) 0:00:02.172 ******** 2026-03-07 00:41:45.460253 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:41:45.460259 | orchestrator | 2026-03-07 00:41:45.460266 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-07 00:41:45.460272 | orchestrator | Saturday 07 March 2026 00:41:39 +0000 (0:00:00.237) 0:00:02.410 ******** 2026-03-07 00:41:45.460278 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:41:45.460284 | orchestrator | 2026-03-07 00:41:45.460289 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-07 00:41:45.460295 | orchestrator | Saturday 07 March 2026 00:41:39 +0000 (0:00:00.197) 0:00:02.608 ******** 2026-03-07 00:41:45.460301 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:41:45.460307 | orchestrator | 2026-03-07 00:41:45.460313 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-07 00:41:45.460319 | orchestrator | Saturday 07 March 2026 00:41:40 +0000 (0:00:00.201) 0:00:02.809 ******** 2026-03-07 00:41:45.460325 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:41:45.460331 | orchestrator | 2026-03-07 00:41:45.460337 | orchestrator | TASK [Add known links to the list of available block devices] 
****************** 2026-03-07 00:41:45.460343 | orchestrator | Saturday 07 March 2026 00:41:40 +0000 (0:00:00.216) 0:00:03.026 ******** 2026-03-07 00:41:45.460349 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_448434c8-f92f-4dad-84e2-85ad64f4e35e) 2026-03-07 00:41:45.460357 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_448434c8-f92f-4dad-84e2-85ad64f4e35e) 2026-03-07 00:41:45.460363 | orchestrator | 2026-03-07 00:41:45.460369 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-07 00:41:45.460387 | orchestrator | Saturday 07 March 2026 00:41:40 +0000 (0:00:00.420) 0:00:03.447 ******** 2026-03-07 00:41:45.460394 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_6b3da8fe-8a9b-450a-9caf-2db14f74686e) 2026-03-07 00:41:45.460400 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_6b3da8fe-8a9b-450a-9caf-2db14f74686e) 2026-03-07 00:41:45.460406 | orchestrator | 2026-03-07 00:41:45.460416 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-07 00:41:45.460427 | orchestrator | Saturday 07 March 2026 00:41:41 +0000 (0:00:00.674) 0:00:04.122 ******** 2026-03-07 00:41:45.460433 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_72259f68-e866-4719-b0ea-eb473e4fd6bd) 2026-03-07 00:41:45.460439 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_72259f68-e866-4719-b0ea-eb473e4fd6bd) 2026-03-07 00:41:45.460445 | orchestrator | 2026-03-07 00:41:45.460451 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-07 00:41:45.460457 | orchestrator | Saturday 07 March 2026 00:41:42 +0000 (0:00:00.689) 0:00:04.811 ******** 2026-03-07 00:41:45.460487 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_cc667673-5185-49c1-bb99-04f4fd4068da) 2026-03-07 00:41:45.460494 | orchestrator | 
ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_cc667673-5185-49c1-bb99-04f4fd4068da) 2026-03-07 00:41:45.460501 | orchestrator | 2026-03-07 00:41:45.460507 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-07 00:41:45.460513 | orchestrator | Saturday 07 March 2026 00:41:43 +0000 (0:00:01.052) 0:00:05.863 ******** 2026-03-07 00:41:45.460519 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-07 00:41:45.460525 | orchestrator | 2026-03-07 00:41:45.460531 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-07 00:41:45.460536 | orchestrator | Saturday 07 March 2026 00:41:43 +0000 (0:00:00.369) 0:00:06.233 ******** 2026-03-07 00:41:45.460542 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2026-03-07 00:41:45.460548 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2026-03-07 00:41:45.460555 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2026-03-07 00:41:45.460561 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2026-03-07 00:41:45.460566 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2026-03-07 00:41:45.460572 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2026-03-07 00:41:45.460578 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2026-03-07 00:41:45.460584 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2026-03-07 00:41:45.460590 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2026-03-07 00:41:45.460597 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2026-03-07 00:41:45.460603 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2026-03-07 00:41:45.460609 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2026-03-07 00:41:45.460614 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2026-03-07 00:41:45.460620 | orchestrator | 2026-03-07 00:41:45.460626 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-07 00:41:45.460633 | orchestrator | Saturday 07 March 2026 00:41:43 +0000 (0:00:00.420) 0:00:06.653 ******** 2026-03-07 00:41:45.460639 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:41:45.460644 | orchestrator | 2026-03-07 00:41:45.460651 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-07 00:41:45.460657 | orchestrator | Saturday 07 March 2026 00:41:44 +0000 (0:00:00.222) 0:00:06.875 ******** 2026-03-07 00:41:45.460663 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:41:45.460669 | orchestrator | 2026-03-07 00:41:45.460674 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-07 00:41:45.460681 | orchestrator | Saturday 07 March 2026 00:41:44 +0000 (0:00:00.225) 0:00:07.101 ******** 2026-03-07 00:41:45.460687 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:41:45.460697 | orchestrator | 2026-03-07 00:41:45.460704 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-07 00:41:45.460710 | orchestrator | Saturday 07 March 2026 00:41:44 +0000 (0:00:00.211) 0:00:07.313 ******** 2026-03-07 00:41:45.460716 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:41:45.460723 | orchestrator | 2026-03-07 00:41:45.460728 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2026-03-07 00:41:45.460734 | orchestrator | Saturday 07 March 2026 00:41:44 +0000 (0:00:00.211) 0:00:07.524 ******** 2026-03-07 00:41:45.460740 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:41:45.460746 | orchestrator | 2026-03-07 00:41:45.460752 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-07 00:41:45.460758 | orchestrator | Saturday 07 March 2026 00:41:45 +0000 (0:00:00.194) 0:00:07.719 ******** 2026-03-07 00:41:45.460764 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:41:45.460770 | orchestrator | 2026-03-07 00:41:45.460776 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-07 00:41:45.460782 | orchestrator | Saturday 07 March 2026 00:41:45 +0000 (0:00:00.214) 0:00:07.933 ******** 2026-03-07 00:41:45.460788 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:41:45.460795 | orchestrator | 2026-03-07 00:41:45.460806 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-07 00:41:53.881839 | orchestrator | Saturday 07 March 2026 00:41:45 +0000 (0:00:00.207) 0:00:08.141 ******** 2026-03-07 00:41:53.881976 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:41:53.882002 | orchestrator | 2026-03-07 00:41:53.882106 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-07 00:41:53.882121 | orchestrator | Saturday 07 March 2026 00:41:45 +0000 (0:00:00.263) 0:00:08.404 ******** 2026-03-07 00:41:53.882132 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2026-03-07 00:41:53.882144 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2026-03-07 00:41:53.882155 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2026-03-07 00:41:53.882166 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2026-03-07 00:41:53.882177 | orchestrator | 2026-03-07 
00:41:53.882189 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-07 00:41:53.882221 | orchestrator | Saturday 07 March 2026 00:41:46 +0000 (0:00:01.087) 0:00:09.492 ******** 2026-03-07 00:41:53.882233 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:41:53.882243 | orchestrator | 2026-03-07 00:41:53.882254 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-07 00:41:53.882265 | orchestrator | Saturday 07 March 2026 00:41:47 +0000 (0:00:00.199) 0:00:09.691 ******** 2026-03-07 00:41:53.882276 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:41:53.882287 | orchestrator | 2026-03-07 00:41:53.882297 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-07 00:41:53.882308 | orchestrator | Saturday 07 March 2026 00:41:47 +0000 (0:00:00.244) 0:00:09.936 ******** 2026-03-07 00:41:53.882319 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:41:53.882329 | orchestrator | 2026-03-07 00:41:53.882340 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-07 00:41:53.882352 | orchestrator | Saturday 07 March 2026 00:41:47 +0000 (0:00:00.223) 0:00:10.160 ******** 2026-03-07 00:41:53.882365 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:41:53.882378 | orchestrator | 2026-03-07 00:41:53.882390 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-03-07 00:41:53.882402 | orchestrator | Saturday 07 March 2026 00:41:47 +0000 (0:00:00.207) 0:00:10.368 ******** 2026-03-07 00:41:53.882414 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None}) 2026-03-07 00:41:53.882426 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2026-03-07 00:41:53.882438 | orchestrator | 2026-03-07 00:41:53.882451 | orchestrator | TASK [Generate WAL VG names] 
*************************************************** 2026-03-07 00:41:53.882463 | orchestrator | Saturday 07 March 2026 00:41:47 +0000 (0:00:00.238) 0:00:10.606 ******** 2026-03-07 00:41:53.882497 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:41:53.882509 | orchestrator | 2026-03-07 00:41:53.882522 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-03-07 00:41:53.882565 | orchestrator | Saturday 07 March 2026 00:41:48 +0000 (0:00:00.164) 0:00:10.770 ******** 2026-03-07 00:41:53.882578 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:41:53.882591 | orchestrator | 2026-03-07 00:41:53.882602 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-03-07 00:41:53.882613 | orchestrator | Saturday 07 March 2026 00:41:48 +0000 (0:00:00.117) 0:00:10.888 ******** 2026-03-07 00:41:53.882623 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:41:53.882634 | orchestrator | 2026-03-07 00:41:53.882645 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-03-07 00:41:53.882655 | orchestrator | Saturday 07 March 2026 00:41:48 +0000 (0:00:00.127) 0:00:11.015 ******** 2026-03-07 00:41:53.882666 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:41:53.882676 | orchestrator | 2026-03-07 00:41:53.882687 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-03-07 00:41:53.882698 | orchestrator | Saturday 07 March 2026 00:41:48 +0000 (0:00:00.130) 0:00:11.146 ******** 2026-03-07 00:41:53.882710 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'e9f941f3-03bb-56ef-8ac7-c30bc8004c51'}}) 2026-03-07 00:41:53.882721 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '6cee2ec4-9e84-549b-8075-e81043ce518c'}}) 2026-03-07 00:41:53.882732 | orchestrator | 2026-03-07 00:41:53.882743 | orchestrator | TASK 
[Generate lvm_volumes structure (block + db)] ***************************** 2026-03-07 00:41:53.882753 | orchestrator | Saturday 07 March 2026 00:41:48 +0000 (0:00:00.154) 0:00:11.301 ******** 2026-03-07 00:41:53.882765 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'e9f941f3-03bb-56ef-8ac7-c30bc8004c51'}})  2026-03-07 00:41:53.882785 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '6cee2ec4-9e84-549b-8075-e81043ce518c'}})  2026-03-07 00:41:53.882802 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:41:53.882813 | orchestrator | 2026-03-07 00:41:53.882824 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-03-07 00:41:53.882834 | orchestrator | Saturday 07 March 2026 00:41:48 +0000 (0:00:00.188) 0:00:11.490 ******** 2026-03-07 00:41:53.882845 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'e9f941f3-03bb-56ef-8ac7-c30bc8004c51'}})  2026-03-07 00:41:53.882856 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '6cee2ec4-9e84-549b-8075-e81043ce518c'}})  2026-03-07 00:41:53.882867 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:41:53.882877 | orchestrator | 2026-03-07 00:41:53.882888 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-03-07 00:41:53.882898 | orchestrator | Saturday 07 March 2026 00:41:49 +0000 (0:00:00.400) 0:00:11.890 ******** 2026-03-07 00:41:53.882909 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'e9f941f3-03bb-56ef-8ac7-c30bc8004c51'}})  2026-03-07 00:41:53.882944 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '6cee2ec4-9e84-549b-8075-e81043ce518c'}})  2026-03-07 00:41:53.882963 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:41:53.882981 | 
orchestrator | 2026-03-07 00:41:53.882999 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-03-07 00:41:53.883017 | orchestrator | Saturday 07 March 2026 00:41:49 +0000 (0:00:00.162) 0:00:12.053 ******** 2026-03-07 00:41:53.883035 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:41:53.883053 | orchestrator | 2026-03-07 00:41:53.883068 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-03-07 00:41:53.883085 | orchestrator | Saturday 07 March 2026 00:41:49 +0000 (0:00:00.154) 0:00:12.207 ******** 2026-03-07 00:41:53.883103 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:41:53.883132 | orchestrator | 2026-03-07 00:41:53.883149 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-03-07 00:41:53.883166 | orchestrator | Saturday 07 March 2026 00:41:49 +0000 (0:00:00.154) 0:00:12.362 ******** 2026-03-07 00:41:53.883184 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:41:53.883202 | orchestrator | 2026-03-07 00:41:53.883219 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-03-07 00:41:53.883236 | orchestrator | Saturday 07 March 2026 00:41:49 +0000 (0:00:00.138) 0:00:12.500 ******** 2026-03-07 00:41:53.883254 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:41:53.883272 | orchestrator | 2026-03-07 00:41:53.883290 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-03-07 00:41:53.883309 | orchestrator | Saturday 07 March 2026 00:41:49 +0000 (0:00:00.133) 0:00:12.634 ******** 2026-03-07 00:41:53.883328 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:41:53.883346 | orchestrator | 2026-03-07 00:41:53.883363 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-03-07 00:41:53.883374 | orchestrator | Saturday 07 March 2026 00:41:50 +0000 
(0:00:00.137) 0:00:12.772 ********
2026-03-07 00:41:53.883384 | orchestrator | ok: [testbed-node-3] => {
2026-03-07 00:41:53.883395 | orchestrator |     "ceph_osd_devices": {
2026-03-07 00:41:53.883406 | orchestrator |         "sdb": {
2026-03-07 00:41:53.883417 | orchestrator |             "osd_lvm_uuid": "e9f941f3-03bb-56ef-8ac7-c30bc8004c51"
2026-03-07 00:41:53.883428 | orchestrator |         },
2026-03-07 00:41:53.883439 | orchestrator |         "sdc": {
2026-03-07 00:41:53.883449 | orchestrator |             "osd_lvm_uuid": "6cee2ec4-9e84-549b-8075-e81043ce518c"
2026-03-07 00:41:53.883460 | orchestrator |         }
2026-03-07 00:41:53.883471 | orchestrator |     }
2026-03-07 00:41:53.883482 | orchestrator | }
2026-03-07 00:41:53.883493 | orchestrator |
2026-03-07 00:41:53.883503 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-03-07 00:41:53.883514 | orchestrator | Saturday 07 March 2026 00:41:50 +0000 (0:00:00.166) 0:00:12.939 ********
2026-03-07 00:41:53.883525 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:41:53.883579 | orchestrator |
2026-03-07 00:41:53.883591 | orchestrator | TASK [Print DB devices] ********************************************************
2026-03-07 00:41:53.883602 | orchestrator | Saturday 07 March 2026 00:41:50 +0000 (0:00:00.152) 0:00:13.091 ********
2026-03-07 00:41:53.883613 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:41:53.883624 | orchestrator |
2026-03-07 00:41:53.883635 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-03-07 00:41:53.883646 | orchestrator | Saturday 07 March 2026 00:41:50 +0000 (0:00:00.154) 0:00:13.245 ********
2026-03-07 00:41:53.883656 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:41:53.883667 | orchestrator |
2026-03-07 00:41:53.883683 | orchestrator | TASK [Print configuration data] ************************************************
2026-03-07 00:41:53.883701 | orchestrator | Saturday 07 March 2026 00:41:50 +0000 (0:00:00.138) 0:00:13.384 ********
2026-03-07 00:41:53.883716 | orchestrator | changed: [testbed-node-3] => {
2026-03-07 00:41:53.883733 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-03-07 00:41:53.883753 | orchestrator |         "ceph_osd_devices": {
2026-03-07 00:41:53.883772 | orchestrator |             "sdb": {
2026-03-07 00:41:53.883791 | orchestrator |                 "osd_lvm_uuid": "e9f941f3-03bb-56ef-8ac7-c30bc8004c51"
2026-03-07 00:41:53.883810 | orchestrator |             },
2026-03-07 00:41:53.883821 | orchestrator |             "sdc": {
2026-03-07 00:41:53.883832 | orchestrator |                 "osd_lvm_uuid": "6cee2ec4-9e84-549b-8075-e81043ce518c"
2026-03-07 00:41:53.883842 | orchestrator |             }
2026-03-07 00:41:53.883853 | orchestrator |         },
2026-03-07 00:41:53.883864 | orchestrator |         "lvm_volumes": [
2026-03-07 00:41:53.883875 | orchestrator |             {
2026-03-07 00:41:53.883885 | orchestrator |                 "data": "osd-block-e9f941f3-03bb-56ef-8ac7-c30bc8004c51",
2026-03-07 00:41:53.883896 | orchestrator |                 "data_vg": "ceph-e9f941f3-03bb-56ef-8ac7-c30bc8004c51"
2026-03-07 00:41:53.883918 | orchestrator |             },
2026-03-07 00:41:53.883929 | orchestrator |             {
2026-03-07 00:41:53.883939 | orchestrator |                 "data": "osd-block-6cee2ec4-9e84-549b-8075-e81043ce518c",
2026-03-07 00:41:53.883950 | orchestrator |                 "data_vg": "ceph-6cee2ec4-9e84-549b-8075-e81043ce518c"
2026-03-07 00:41:53.883961 | orchestrator |             }
2026-03-07 00:41:53.883971 | orchestrator |         ]
2026-03-07 00:41:53.883981 | orchestrator |     }
2026-03-07 00:41:53.883992 | orchestrator | }
2026-03-07 00:41:53.884003 | orchestrator |
2026-03-07 00:41:53.884014 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-03-07 00:41:53.884024 | orchestrator | Saturday 07 March 2026 00:41:51 +0000 (0:00:00.453) 0:00:13.838 ********
2026-03-07 00:41:53.884035 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-03-07 00:41:53.884046 | orchestrator |
2026-03-07 00:41:53.884056 | orchestrator | PLAY [Ceph
configure LVM] ****************************************************** 2026-03-07 00:41:53.884067 | orchestrator | 2026-03-07 00:41:53.884077 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-07 00:41:53.884088 | orchestrator | Saturday 07 March 2026 00:41:53 +0000 (0:00:01.912) 0:00:15.750 ******** 2026-03-07 00:41:53.884098 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-03-07 00:41:53.884109 | orchestrator | 2026-03-07 00:41:53.884120 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-07 00:41:53.884131 | orchestrator | Saturday 07 March 2026 00:41:53 +0000 (0:00:00.280) 0:00:16.030 ******** 2026-03-07 00:41:53.884141 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:41:53.884152 | orchestrator | 2026-03-07 00:41:53.884175 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-07 00:42:03.045010 | orchestrator | Saturday 07 March 2026 00:41:53 +0000 (0:00:00.530) 0:00:16.561 ******** 2026-03-07 00:42:03.045124 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-03-07 00:42:03.045139 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-03-07 00:42:03.045151 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-03-07 00:42:03.045162 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-03-07 00:42:03.045173 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-03-07 00:42:03.045184 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-03-07 00:42:03.045195 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-03-07 00:42:03.045211 
| orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-03-07 00:42:03.045222 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-03-07 00:42:03.045234 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-03-07 00:42:03.045246 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-03-07 00:42:03.045256 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-03-07 00:42:03.045287 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-03-07 00:42:03.045299 | orchestrator | 2026-03-07 00:42:03.045312 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-07 00:42:03.045323 | orchestrator | Saturday 07 March 2026 00:41:54 +0000 (0:00:00.494) 0:00:17.056 ******** 2026-03-07 00:42:03.045334 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:42:03.045346 | orchestrator | 2026-03-07 00:42:03.045357 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-07 00:42:03.045368 | orchestrator | Saturday 07 March 2026 00:41:54 +0000 (0:00:00.209) 0:00:17.266 ******** 2026-03-07 00:42:03.045399 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:42:03.045411 | orchestrator | 2026-03-07 00:42:03.045422 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-07 00:42:03.045433 | orchestrator | Saturday 07 March 2026 00:41:54 +0000 (0:00:00.196) 0:00:17.462 ******** 2026-03-07 00:42:03.045443 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:42:03.045454 | orchestrator | 2026-03-07 00:42:03.045465 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-07 00:42:03.045476 | 
orchestrator | Saturday 07 March 2026 00:41:55 +0000 (0:00:00.226) 0:00:17.688 ******** 2026-03-07 00:42:03.045487 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:42:03.045498 | orchestrator | 2026-03-07 00:42:03.045509 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-07 00:42:03.045520 | orchestrator | Saturday 07 March 2026 00:41:55 +0000 (0:00:00.226) 0:00:17.915 ******** 2026-03-07 00:42:03.045533 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:42:03.045547 | orchestrator | 2026-03-07 00:42:03.045560 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-07 00:42:03.045573 | orchestrator | Saturday 07 March 2026 00:41:55 +0000 (0:00:00.735) 0:00:18.650 ******** 2026-03-07 00:42:03.045585 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:42:03.045599 | orchestrator | 2026-03-07 00:42:03.045647 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-07 00:42:03.045666 | orchestrator | Saturday 07 March 2026 00:41:56 +0000 (0:00:00.230) 0:00:18.881 ******** 2026-03-07 00:42:03.045679 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:42:03.045692 | orchestrator | 2026-03-07 00:42:03.045706 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-07 00:42:03.045718 | orchestrator | Saturday 07 March 2026 00:41:56 +0000 (0:00:00.223) 0:00:19.105 ******** 2026-03-07 00:42:03.045732 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:42:03.045744 | orchestrator | 2026-03-07 00:42:03.045757 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-07 00:42:03.045770 | orchestrator | Saturday 07 March 2026 00:41:56 +0000 (0:00:00.240) 0:00:19.345 ******** 2026-03-07 00:42:03.045789 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-0QEMU_QEMU_HARDDISK_420a1f40-7e0f-4106-8b14-3c7e5e75cad6) 2026-03-07 00:42:03.045809 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_420a1f40-7e0f-4106-8b14-3c7e5e75cad6) 2026-03-07 00:42:03.045826 | orchestrator | 2026-03-07 00:42:03.045873 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-07 00:42:03.045892 | orchestrator | Saturday 07 March 2026 00:41:57 +0000 (0:00:00.599) 0:00:19.945 ******** 2026-03-07 00:42:03.045912 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_c95cdd10-84fe-4990-af41-f1a34ec8ee15) 2026-03-07 00:42:03.045932 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_c95cdd10-84fe-4990-af41-f1a34ec8ee15) 2026-03-07 00:42:03.045951 | orchestrator | 2026-03-07 00:42:03.045964 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-07 00:42:03.045975 | orchestrator | Saturday 07 March 2026 00:41:57 +0000 (0:00:00.547) 0:00:20.492 ******** 2026-03-07 00:42:03.045986 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_aeae70bf-06ae-4bd4-b471-9be2a413b359) 2026-03-07 00:42:03.045997 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_aeae70bf-06ae-4bd4-b471-9be2a413b359) 2026-03-07 00:42:03.046008 | orchestrator | 2026-03-07 00:42:03.046080 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-07 00:42:03.046112 | orchestrator | Saturday 07 March 2026 00:41:58 +0000 (0:00:00.470) 0:00:20.963 ******** 2026-03-07 00:42:03.046124 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_9c38bee3-edc8-40af-8be7-576eb57a340e) 2026-03-07 00:42:03.046135 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_9c38bee3-edc8-40af-8be7-576eb57a340e) 2026-03-07 00:42:03.046146 | orchestrator | 2026-03-07 00:42:03.046169 | orchestrator | TASK [Add known links to 
the list of available block devices] ****************** 2026-03-07 00:42:03.046180 | orchestrator | Saturday 07 March 2026 00:41:58 +0000 (0:00:00.482) 0:00:21.445 ******** 2026-03-07 00:42:03.046191 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-07 00:42:03.046202 | orchestrator | 2026-03-07 00:42:03.046213 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-07 00:42:03.046224 | orchestrator | Saturday 07 March 2026 00:41:59 +0000 (0:00:00.380) 0:00:21.825 ******** 2026-03-07 00:42:03.046235 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2026-03-07 00:42:03.046246 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-03-07 00:42:03.046268 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-03-07 00:42:03.046321 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-03-07 00:42:03.046333 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-03-07 00:42:03.046344 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-03-07 00:42:03.046355 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-03-07 00:42:03.046365 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-03-07 00:42:03.046376 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-03-07 00:42:03.046387 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-03-07 00:42:03.046398 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 
2026-03-07 00:42:03.046408 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-03-07 00:42:03.046419 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-03-07 00:42:03.046430 | orchestrator | 2026-03-07 00:42:03.046441 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-07 00:42:03.046451 | orchestrator | Saturday 07 March 2026 00:41:59 +0000 (0:00:00.415) 0:00:22.241 ******** 2026-03-07 00:42:03.046462 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:42:03.046473 | orchestrator | 2026-03-07 00:42:03.046484 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-07 00:42:03.046495 | orchestrator | Saturday 07 March 2026 00:42:00 +0000 (0:00:00.751) 0:00:22.992 ******** 2026-03-07 00:42:03.046505 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:42:03.046516 | orchestrator | 2026-03-07 00:42:03.046527 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-07 00:42:03.046538 | orchestrator | Saturday 07 March 2026 00:42:00 +0000 (0:00:00.232) 0:00:23.225 ******** 2026-03-07 00:42:03.046549 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:42:03.046559 | orchestrator | 2026-03-07 00:42:03.046570 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-07 00:42:03.046581 | orchestrator | Saturday 07 March 2026 00:42:00 +0000 (0:00:00.208) 0:00:23.433 ******** 2026-03-07 00:42:03.046592 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:42:03.046692 | orchestrator | 2026-03-07 00:42:03.046708 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-07 00:42:03.046719 | orchestrator | Saturday 07 March 2026 00:42:00 +0000 (0:00:00.213) 0:00:23.647 ******** 2026-03-07 00:42:03.046730 
| orchestrator | skipping: [testbed-node-4] 2026-03-07 00:42:03.046741 | orchestrator | 2026-03-07 00:42:03.046752 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-07 00:42:03.046763 | orchestrator | Saturday 07 March 2026 00:42:01 +0000 (0:00:00.212) 0:00:23.859 ******** 2026-03-07 00:42:03.046774 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:42:03.046795 | orchestrator | 2026-03-07 00:42:03.046806 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-07 00:42:03.046817 | orchestrator | Saturday 07 March 2026 00:42:01 +0000 (0:00:00.206) 0:00:24.065 ******** 2026-03-07 00:42:03.046828 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:42:03.046839 | orchestrator | 2026-03-07 00:42:03.046850 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-07 00:42:03.046861 | orchestrator | Saturday 07 March 2026 00:42:01 +0000 (0:00:00.219) 0:00:24.285 ******** 2026-03-07 00:42:03.046872 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:42:03.046883 | orchestrator | 2026-03-07 00:42:03.046894 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-07 00:42:03.046911 | orchestrator | Saturday 07 March 2026 00:42:01 +0000 (0:00:00.344) 0:00:24.630 ******** 2026-03-07 00:42:03.046931 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-03-07 00:42:03.046960 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-03-07 00:42:03.046985 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-03-07 00:42:03.047001 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-03-07 00:42:03.047018 | orchestrator | 2026-03-07 00:42:03.047034 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-07 00:42:03.047050 | orchestrator | Saturday 07 March 2026 00:42:02 +0000 (0:00:00.947) 
0:00:25.577 ******** 2026-03-07 00:42:03.047067 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:42:10.732351 | orchestrator | 2026-03-07 00:42:10.732467 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-07 00:42:10.732485 | orchestrator | Saturday 07 March 2026 00:42:03 +0000 (0:00:00.235) 0:00:25.812 ******** 2026-03-07 00:42:10.732497 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:42:10.732509 | orchestrator | 2026-03-07 00:42:10.732520 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-07 00:42:10.732531 | orchestrator | Saturday 07 March 2026 00:42:03 +0000 (0:00:00.323) 0:00:26.136 ******** 2026-03-07 00:42:10.732542 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:42:10.732553 | orchestrator | 2026-03-07 00:42:10.732563 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-07 00:42:10.732574 | orchestrator | Saturday 07 March 2026 00:42:03 +0000 (0:00:00.289) 0:00:26.425 ******** 2026-03-07 00:42:10.732585 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:42:10.732596 | orchestrator | 2026-03-07 00:42:10.732606 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-03-07 00:42:10.732617 | orchestrator | Saturday 07 March 2026 00:42:04 +0000 (0:00:01.098) 0:00:27.524 ******** 2026-03-07 00:42:10.732628 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2026-03-07 00:42:10.732638 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2026-03-07 00:42:10.732648 | orchestrator | 2026-03-07 00:42:10.732658 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-03-07 00:42:10.732752 | orchestrator | Saturday 07 March 2026 00:42:05 +0000 (0:00:00.185) 0:00:27.710 ******** 2026-03-07 00:42:10.732763 | orchestrator | skipping: 
[testbed-node-4] 2026-03-07 00:42:10.732773 | orchestrator | 2026-03-07 00:42:10.732782 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-03-07 00:42:10.732792 | orchestrator | Saturday 07 March 2026 00:42:05 +0000 (0:00:00.163) 0:00:27.873 ******** 2026-03-07 00:42:10.732801 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:42:10.732811 | orchestrator | 2026-03-07 00:42:10.732820 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-03-07 00:42:10.732835 | orchestrator | Saturday 07 March 2026 00:42:05 +0000 (0:00:00.172) 0:00:28.046 ******** 2026-03-07 00:42:10.732845 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:42:10.732856 | orchestrator | 2026-03-07 00:42:10.732867 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-03-07 00:42:10.732878 | orchestrator | Saturday 07 March 2026 00:42:05 +0000 (0:00:00.145) 0:00:28.191 ******** 2026-03-07 00:42:10.732912 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:42:10.732925 | orchestrator | 2026-03-07 00:42:10.732936 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-03-07 00:42:10.732948 | orchestrator | Saturday 07 March 2026 00:42:05 +0000 (0:00:00.141) 0:00:28.333 ******** 2026-03-07 00:42:10.732960 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c6d853cd-f8df-5f7f-ab25-9ac4f40a4d2c'}}) 2026-03-07 00:42:10.732972 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '50ec861c-6b17-5421-b6cb-257ea2a8b129'}}) 2026-03-07 00:42:10.732983 | orchestrator | 2026-03-07 00:42:10.732994 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-03-07 00:42:10.733006 | orchestrator | Saturday 07 March 2026 00:42:05 +0000 (0:00:00.197) 0:00:28.531 ******** 2026-03-07 00:42:10.733017 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c6d853cd-f8df-5f7f-ab25-9ac4f40a4d2c'}})  2026-03-07 00:42:10.733031 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '50ec861c-6b17-5421-b6cb-257ea2a8b129'}})  2026-03-07 00:42:10.733042 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:42:10.733053 | orchestrator | 2026-03-07 00:42:10.733064 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-03-07 00:42:10.733075 | orchestrator | Saturday 07 March 2026 00:42:05 +0000 (0:00:00.151) 0:00:28.682 ******** 2026-03-07 00:42:10.733086 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c6d853cd-f8df-5f7f-ab25-9ac4f40a4d2c'}})  2026-03-07 00:42:10.733098 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '50ec861c-6b17-5421-b6cb-257ea2a8b129'}})  2026-03-07 00:42:10.733110 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:42:10.733119 | orchestrator | 2026-03-07 00:42:10.733129 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-03-07 00:42:10.733139 | orchestrator | Saturday 07 March 2026 00:42:06 +0000 (0:00:00.176) 0:00:28.859 ******** 2026-03-07 00:42:10.733148 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c6d853cd-f8df-5f7f-ab25-9ac4f40a4d2c'}})  2026-03-07 00:42:10.733158 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '50ec861c-6b17-5421-b6cb-257ea2a8b129'}})  2026-03-07 00:42:10.733167 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:42:10.733177 | orchestrator | 2026-03-07 00:42:10.733186 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-03-07 00:42:10.733196 | orchestrator | Saturday 07 March 2026 00:42:06 +0000 
(0:00:00.193) 0:00:29.052 ********
2026-03-07 00:42:10.733205 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:42:10.733214 | orchestrator |
2026-03-07 00:42:10.733224 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-03-07 00:42:10.733234 | orchestrator | Saturday 07 March 2026 00:42:06 +0000 (0:00:00.143) 0:00:29.196 ********
2026-03-07 00:42:10.733243 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:42:10.733252 | orchestrator |
2026-03-07 00:42:10.733262 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-03-07 00:42:10.733271 | orchestrator | Saturday 07 March 2026 00:42:06 +0000 (0:00:00.146) 0:00:29.342 ********
2026-03-07 00:42:10.733297 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:42:10.733307 | orchestrator |
2026-03-07 00:42:10.733317 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-03-07 00:42:10.733326 | orchestrator | Saturday 07 March 2026 00:42:07 +0000 (0:00:00.347) 0:00:29.690 ********
2026-03-07 00:42:10.733336 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:42:10.733345 | orchestrator |
2026-03-07 00:42:10.733355 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-03-07 00:42:10.733364 | orchestrator | Saturday 07 March 2026 00:42:07 +0000 (0:00:00.159) 0:00:29.849 ********
2026-03-07 00:42:10.733374 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:42:10.733390 | orchestrator |
2026-03-07 00:42:10.733400 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-03-07 00:42:10.733410 | orchestrator | Saturday 07 March 2026 00:42:07 +0000 (0:00:00.153) 0:00:30.002 ********
2026-03-07 00:42:10.733419 | orchestrator | ok: [testbed-node-4] => {
2026-03-07 00:42:10.733429 | orchestrator |     "ceph_osd_devices": {
2026-03-07 00:42:10.733439 | orchestrator |         "sdb": {
2026-03-07 00:42:10.733449 | orchestrator |             "osd_lvm_uuid": "c6d853cd-f8df-5f7f-ab25-9ac4f40a4d2c"
2026-03-07 00:42:10.733458 | orchestrator |         },
2026-03-07 00:42:10.733468 | orchestrator |         "sdc": {
2026-03-07 00:42:10.733477 | orchestrator |             "osd_lvm_uuid": "50ec861c-6b17-5421-b6cb-257ea2a8b129"
2026-03-07 00:42:10.733487 | orchestrator |         }
2026-03-07 00:42:10.733496 | orchestrator |     }
2026-03-07 00:42:10.733506 | orchestrator | }
2026-03-07 00:42:10.733516 | orchestrator |
2026-03-07 00:42:10.733526 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-03-07 00:42:10.733536 | orchestrator | Saturday 07 March 2026 00:42:07 +0000 (0:00:00.154) 0:00:30.157 ********
2026-03-07 00:42:10.733545 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:42:10.733555 | orchestrator |
2026-03-07 00:42:10.733564 | orchestrator | TASK [Print DB devices] ********************************************************
2026-03-07 00:42:10.733574 | orchestrator | Saturday 07 March 2026 00:42:07 +0000 (0:00:00.130) 0:00:30.288 ********
2026-03-07 00:42:10.733583 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:42:10.733592 | orchestrator |
2026-03-07 00:42:10.733602 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-03-07 00:42:10.733612 | orchestrator | Saturday 07 March 2026 00:42:07 +0000 (0:00:00.134) 0:00:30.423 ********
2026-03-07 00:42:10.733621 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:42:10.733631 | orchestrator |
2026-03-07 00:42:10.733640 | orchestrator | TASK [Print configuration data] ************************************************
2026-03-07 00:42:10.733655 | orchestrator | Saturday 07 March 2026 00:42:07 +0000 (0:00:00.128) 0:00:30.551 ********
2026-03-07 00:42:10.733752 | orchestrator | changed: [testbed-node-4] => {
2026-03-07 00:42:10.733765 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-03-07 00:42:10.733775 | orchestrator |         "ceph_osd_devices": {
2026-03-07 00:42:10.733785 | orchestrator |             "sdb": {
2026-03-07 00:42:10.733794 | orchestrator |                 "osd_lvm_uuid": "c6d853cd-f8df-5f7f-ab25-9ac4f40a4d2c"
2026-03-07 00:42:10.733804 | orchestrator |             },
2026-03-07 00:42:10.733814 | orchestrator |             "sdc": {
2026-03-07 00:42:10.733823 | orchestrator |                 "osd_lvm_uuid": "50ec861c-6b17-5421-b6cb-257ea2a8b129"
2026-03-07 00:42:10.733833 | orchestrator |             }
2026-03-07 00:42:10.733842 | orchestrator |         },
2026-03-07 00:42:10.733852 | orchestrator |         "lvm_volumes": [
2026-03-07 00:42:10.733861 | orchestrator |             {
2026-03-07 00:42:10.733871 | orchestrator |                 "data": "osd-block-c6d853cd-f8df-5f7f-ab25-9ac4f40a4d2c",
2026-03-07 00:42:10.733881 | orchestrator |                 "data_vg": "ceph-c6d853cd-f8df-5f7f-ab25-9ac4f40a4d2c"
2026-03-07 00:42:10.733890 | orchestrator |             },
2026-03-07 00:42:10.733902 | orchestrator |             {
2026-03-07 00:42:10.733918 | orchestrator |                 "data": "osd-block-50ec861c-6b17-5421-b6cb-257ea2a8b129",
2026-03-07 00:42:10.733935 | orchestrator |                 "data_vg": "ceph-50ec861c-6b17-5421-b6cb-257ea2a8b129"
2026-03-07 00:42:10.733951 | orchestrator |             }
2026-03-07 00:42:10.733967 | orchestrator |         ]
2026-03-07 00:42:10.733982 | orchestrator |     }
2026-03-07 00:42:10.733998 | orchestrator | }
2026-03-07 00:42:10.734013 | orchestrator |
2026-03-07 00:42:10.734139 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-03-07 00:42:10.734158 | orchestrator | Saturday 07 March 2026 00:42:08 +0000 (0:00:00.219) 0:00:30.770 ********
2026-03-07 00:42:10.734174 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2026-03-07 00:42:10.734189 | orchestrator |
2026-03-07 00:42:10.734210 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-03-07 00:42:10.734220 | orchestrator |
2026-03-07 00:42:10.734230 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-03-07 00:42:10.734239 | orchestrator | Saturday 07 March 2026 00:42:09 +0000 (0:00:01.125) 0:00:31.896 ******** 2026-03-07 00:42:10.734249 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-03-07 00:42:10.734259 | orchestrator | 2026-03-07 00:42:10.734268 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-07 00:42:10.734278 | orchestrator | Saturday 07 March 2026 00:42:09 +0000 (0:00:00.749) 0:00:32.646 ******** 2026-03-07 00:42:10.734288 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:42:10.734298 | orchestrator | 2026-03-07 00:42:10.734307 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-07 00:42:10.734317 | orchestrator | Saturday 07 March 2026 00:42:10 +0000 (0:00:00.311) 0:00:32.957 ******** 2026-03-07 00:42:10.734326 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2026-03-07 00:42:10.734336 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2026-03-07 00:42:10.734346 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2026-03-07 00:42:10.734356 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2026-03-07 00:42:10.734365 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2026-03-07 00:42:10.734386 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2026-03-07 00:42:19.886884 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2026-03-07 00:42:19.886969 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2026-03-07 00:42:19.886975 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2026-03-07 
00:42:19.886980 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2026-03-07 00:42:19.886984 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2026-03-07 00:42:19.886988 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2026-03-07 00:42:19.886992 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2026-03-07 00:42:19.886996 | orchestrator | 2026-03-07 00:42:19.887002 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-07 00:42:19.887007 | orchestrator | Saturday 07 March 2026 00:42:10 +0000 (0:00:00.578) 0:00:33.536 ******** 2026-03-07 00:42:19.887011 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:42:19.887015 | orchestrator | 2026-03-07 00:42:19.887020 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-07 00:42:19.887024 | orchestrator | Saturday 07 March 2026 00:42:11 +0000 (0:00:00.271) 0:00:33.808 ******** 2026-03-07 00:42:19.887027 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:42:19.887031 | orchestrator | 2026-03-07 00:42:19.887035 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-07 00:42:19.887039 | orchestrator | Saturday 07 March 2026 00:42:11 +0000 (0:00:00.277) 0:00:34.085 ******** 2026-03-07 00:42:19.887042 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:42:19.887046 | orchestrator | 2026-03-07 00:42:19.887050 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-07 00:42:19.887054 | orchestrator | Saturday 07 March 2026 00:42:11 +0000 (0:00:00.215) 0:00:34.300 ******** 2026-03-07 00:42:19.887058 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:42:19.887061 | orchestrator | 2026-03-07 00:42:19.887065 | orchestrator 
| TASK [Add known links to the list of available block devices] ****************** 2026-03-07 00:42:19.887069 | orchestrator | Saturday 07 March 2026 00:42:11 +0000 (0:00:00.187) 0:00:34.488 ******** 2026-03-07 00:42:19.887088 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:42:19.887092 | orchestrator | 2026-03-07 00:42:19.887096 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-07 00:42:19.887100 | orchestrator | Saturday 07 March 2026 00:42:12 +0000 (0:00:00.216) 0:00:34.704 ******** 2026-03-07 00:42:19.887104 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:42:19.887107 | orchestrator | 2026-03-07 00:42:19.887111 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-07 00:42:19.887115 | orchestrator | Saturday 07 March 2026 00:42:12 +0000 (0:00:00.199) 0:00:34.903 ******** 2026-03-07 00:42:19.887119 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:42:19.887122 | orchestrator | 2026-03-07 00:42:19.887126 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-07 00:42:19.887130 | orchestrator | Saturday 07 March 2026 00:42:12 +0000 (0:00:00.215) 0:00:35.119 ******** 2026-03-07 00:42:19.887134 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:42:19.887147 | orchestrator | 2026-03-07 00:42:19.887151 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-07 00:42:19.887154 | orchestrator | Saturday 07 March 2026 00:42:12 +0000 (0:00:00.209) 0:00:35.329 ******** 2026-03-07 00:42:19.887158 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_86fe1ccb-6d77-4e2c-ab6c-94ce433f2c86) 2026-03-07 00:42:19.887163 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_86fe1ccb-6d77-4e2c-ab6c-94ce433f2c86) 2026-03-07 00:42:19.887167 | orchestrator | 2026-03-07 00:42:19.887171 | orchestrator | TASK [Add 
known links to the list of available block devices] ****************** 2026-03-07 00:42:19.887174 | orchestrator | Saturday 07 March 2026 00:42:13 +0000 (0:00:00.881) 0:00:36.211 ******** 2026-03-07 00:42:19.887191 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_34b2d3d1-49da-433c-9475-894febcc7103) 2026-03-07 00:42:19.887195 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_34b2d3d1-49da-433c-9475-894febcc7103) 2026-03-07 00:42:19.887199 | orchestrator | 2026-03-07 00:42:19.887203 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-07 00:42:19.887207 | orchestrator | Saturday 07 March 2026 00:42:14 +0000 (0:00:00.506) 0:00:36.718 ******** 2026-03-07 00:42:19.887210 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_c20bba62-61d0-4a1a-9760-7959bbad95dc) 2026-03-07 00:42:19.887214 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_c20bba62-61d0-4a1a-9760-7959bbad95dc) 2026-03-07 00:42:19.887218 | orchestrator | 2026-03-07 00:42:19.887222 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-07 00:42:19.887225 | orchestrator | Saturday 07 March 2026 00:42:14 +0000 (0:00:00.469) 0:00:37.187 ******** 2026-03-07 00:42:19.887229 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_56f8efd0-3f15-4df4-bf76-395b3326da9d) 2026-03-07 00:42:19.887233 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_56f8efd0-3f15-4df4-bf76-395b3326da9d) 2026-03-07 00:42:19.887237 | orchestrator | 2026-03-07 00:42:19.887241 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-07 00:42:19.887244 | orchestrator | Saturday 07 March 2026 00:42:15 +0000 (0:00:00.552) 0:00:37.740 ******** 2026-03-07 00:42:19.887248 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-07 00:42:19.887252 | 
orchestrator | 2026-03-07 00:42:19.887256 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-07 00:42:19.887271 | orchestrator | Saturday 07 March 2026 00:42:15 +0000 (0:00:00.414) 0:00:38.154 ******** 2026-03-07 00:42:19.887275 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-03-07 00:42:19.887279 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-03-07 00:42:19.887283 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-03-07 00:42:19.887287 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-03-07 00:42:19.887294 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-03-07 00:42:19.887298 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-03-07 00:42:19.887302 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-03-07 00:42:19.887306 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-03-07 00:42:19.887309 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-03-07 00:42:19.887313 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-03-07 00:42:19.887317 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2026-03-07 00:42:19.887321 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-03-07 00:42:19.887324 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-03-07 00:42:19.887328 | orchestrator | 
2026-03-07 00:42:19.887332 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-07 00:42:19.887336 | orchestrator | Saturday 07 March 2026 00:42:15 +0000 (0:00:00.420) 0:00:38.575 ******** 2026-03-07 00:42:19.887339 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:42:19.887343 | orchestrator | 2026-03-07 00:42:19.887347 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-07 00:42:19.887351 | orchestrator | Saturday 07 March 2026 00:42:16 +0000 (0:00:00.300) 0:00:38.875 ******** 2026-03-07 00:42:19.887354 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:42:19.887358 | orchestrator | 2026-03-07 00:42:19.887362 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-07 00:42:19.887366 | orchestrator | Saturday 07 March 2026 00:42:16 +0000 (0:00:00.258) 0:00:39.133 ******** 2026-03-07 00:42:19.887369 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:42:19.887373 | orchestrator | 2026-03-07 00:42:19.887377 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-07 00:42:19.887381 | orchestrator | Saturday 07 March 2026 00:42:16 +0000 (0:00:00.224) 0:00:39.358 ******** 2026-03-07 00:42:19.887385 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:42:19.887388 | orchestrator | 2026-03-07 00:42:19.887392 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-07 00:42:19.887396 | orchestrator | Saturday 07 March 2026 00:42:16 +0000 (0:00:00.216) 0:00:39.575 ******** 2026-03-07 00:42:19.887400 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:42:19.887403 | orchestrator | 2026-03-07 00:42:19.887407 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-07 00:42:19.887411 | orchestrator | Saturday 07 March 2026 00:42:17 +0000 
(0:00:00.206) 0:00:39.781 ******** 2026-03-07 00:42:19.887415 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:42:19.887418 | orchestrator | 2026-03-07 00:42:19.887422 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-07 00:42:19.887426 | orchestrator | Saturday 07 March 2026 00:42:17 +0000 (0:00:00.690) 0:00:40.471 ******** 2026-03-07 00:42:19.887430 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:42:19.887433 | orchestrator | 2026-03-07 00:42:19.887437 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-07 00:42:19.887441 | orchestrator | Saturday 07 March 2026 00:42:18 +0000 (0:00:00.302) 0:00:40.774 ******** 2026-03-07 00:42:19.887445 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:42:19.887448 | orchestrator | 2026-03-07 00:42:19.887452 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-07 00:42:19.887456 | orchestrator | Saturday 07 March 2026 00:42:18 +0000 (0:00:00.220) 0:00:40.994 ******** 2026-03-07 00:42:19.887460 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-03-07 00:42:19.887468 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2026-03-07 00:42:19.887472 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-03-07 00:42:19.887476 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-03-07 00:42:19.887479 | orchestrator | 2026-03-07 00:42:19.887483 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-07 00:42:19.887487 | orchestrator | Saturday 07 March 2026 00:42:19 +0000 (0:00:00.698) 0:00:41.693 ******** 2026-03-07 00:42:19.887491 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:42:19.887494 | orchestrator | 2026-03-07 00:42:19.887498 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-07 00:42:19.887502 | orchestrator | 
Saturday 07 March 2026 00:42:19 +0000 (0:00:00.209) 0:00:41.902 ******** 2026-03-07 00:42:19.887506 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:42:19.887510 | orchestrator | 2026-03-07 00:42:19.887513 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-07 00:42:19.887517 | orchestrator | Saturday 07 March 2026 00:42:19 +0000 (0:00:00.221) 0:00:42.124 ******** 2026-03-07 00:42:19.887521 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:42:19.887525 | orchestrator | 2026-03-07 00:42:19.887528 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-07 00:42:19.887532 | orchestrator | Saturday 07 March 2026 00:42:19 +0000 (0:00:00.225) 0:00:42.349 ******** 2026-03-07 00:42:19.887536 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:42:19.887540 | orchestrator | 2026-03-07 00:42:19.887546 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-03-07 00:42:24.952991 | orchestrator | Saturday 07 March 2026 00:42:19 +0000 (0:00:00.213) 0:00:42.563 ******** 2026-03-07 00:42:24.953105 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2026-03-07 00:42:24.953121 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2026-03-07 00:42:24.953133 | orchestrator | 2026-03-07 00:42:24.953147 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-03-07 00:42:24.953158 | orchestrator | Saturday 07 March 2026 00:42:20 +0000 (0:00:00.188) 0:00:42.751 ******** 2026-03-07 00:42:24.953169 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:42:24.953181 | orchestrator | 2026-03-07 00:42:24.953193 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-03-07 00:42:24.953204 | orchestrator | Saturday 07 March 2026 00:42:20 +0000 (0:00:00.139) 0:00:42.891 ******** 
2026-03-07 00:42:24.953235 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:42:24.953246 | orchestrator | 2026-03-07 00:42:24.953257 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-03-07 00:42:24.953268 | orchestrator | Saturday 07 March 2026 00:42:20 +0000 (0:00:00.148) 0:00:43.040 ******** 2026-03-07 00:42:24.953279 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:42:24.953290 | orchestrator | 2026-03-07 00:42:24.953316 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-03-07 00:42:24.953328 | orchestrator | Saturday 07 March 2026 00:42:20 +0000 (0:00:00.368) 0:00:43.409 ******** 2026-03-07 00:42:24.953339 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:42:24.953351 | orchestrator | 2026-03-07 00:42:24.953362 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-03-07 00:42:24.953373 | orchestrator | Saturday 07 March 2026 00:42:20 +0000 (0:00:00.149) 0:00:43.558 ******** 2026-03-07 00:42:24.953384 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f3e458ba-b75f-5cb4-a1c9-e61fe3486295'}}) 2026-03-07 00:42:24.953401 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '5cfbeba1-5550-585b-8a7e-42a4921f8eca'}}) 2026-03-07 00:42:24.953412 | orchestrator | 2026-03-07 00:42:24.953423 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-03-07 00:42:24.953434 | orchestrator | Saturday 07 March 2026 00:42:21 +0000 (0:00:00.172) 0:00:43.731 ******** 2026-03-07 00:42:24.953445 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f3e458ba-b75f-5cb4-a1c9-e61fe3486295'}})  2026-03-07 00:42:24.953479 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '5cfbeba1-5550-585b-8a7e-42a4921f8eca'}})  
2026-03-07 00:42:24.953491 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:42:24.953502 | orchestrator | 2026-03-07 00:42:24.953514 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-03-07 00:42:24.953527 | orchestrator | Saturday 07 March 2026 00:42:21 +0000 (0:00:00.159) 0:00:43.890 ******** 2026-03-07 00:42:24.953539 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f3e458ba-b75f-5cb4-a1c9-e61fe3486295'}})  2026-03-07 00:42:24.953552 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '5cfbeba1-5550-585b-8a7e-42a4921f8eca'}})  2026-03-07 00:42:24.953563 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:42:24.953576 | orchestrator | 2026-03-07 00:42:24.953588 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-03-07 00:42:24.953600 | orchestrator | Saturday 07 March 2026 00:42:21 +0000 (0:00:00.167) 0:00:44.058 ******** 2026-03-07 00:42:24.953613 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f3e458ba-b75f-5cb4-a1c9-e61fe3486295'}})  2026-03-07 00:42:24.953625 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '5cfbeba1-5550-585b-8a7e-42a4921f8eca'}})  2026-03-07 00:42:24.953638 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:42:24.953650 | orchestrator | 2026-03-07 00:42:24.953663 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-03-07 00:42:24.953673 | orchestrator | Saturday 07 March 2026 00:42:21 +0000 (0:00:00.164) 0:00:44.222 ******** 2026-03-07 00:42:24.953684 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:42:24.953694 | orchestrator | 2026-03-07 00:42:24.953705 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-03-07 00:42:24.953715 | 
orchestrator | Saturday 07 March 2026 00:42:21 +0000 (0:00:00.159) 0:00:44.382 ******** 2026-03-07 00:42:24.953726 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:42:24.953737 | orchestrator | 2026-03-07 00:42:24.953748 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-03-07 00:42:24.953758 | orchestrator | Saturday 07 March 2026 00:42:21 +0000 (0:00:00.206) 0:00:44.588 ******** 2026-03-07 00:42:24.953769 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:42:24.953834 | orchestrator | 2026-03-07 00:42:24.953845 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-03-07 00:42:24.953856 | orchestrator | Saturday 07 March 2026 00:42:22 +0000 (0:00:00.149) 0:00:44.738 ******** 2026-03-07 00:42:24.953867 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:42:24.953877 | orchestrator | 2026-03-07 00:42:24.953888 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-03-07 00:42:24.953898 | orchestrator | Saturday 07 March 2026 00:42:22 +0000 (0:00:00.151) 0:00:44.890 ******** 2026-03-07 00:42:24.953909 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:42:24.953919 | orchestrator | 2026-03-07 00:42:24.953930 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-03-07 00:42:24.953941 | orchestrator | Saturday 07 March 2026 00:42:22 +0000 (0:00:00.147) 0:00:45.037 ******** 2026-03-07 00:42:24.953951 | orchestrator | ok: [testbed-node-5] => { 2026-03-07 00:42:24.953962 | orchestrator |  "ceph_osd_devices": { 2026-03-07 00:42:24.953973 | orchestrator |  "sdb": { 2026-03-07 00:42:24.954003 | orchestrator |  "osd_lvm_uuid": "f3e458ba-b75f-5cb4-a1c9-e61fe3486295" 2026-03-07 00:42:24.954076 | orchestrator |  }, 2026-03-07 00:42:24.954090 | orchestrator |  "sdc": { 2026-03-07 00:42:24.954101 | orchestrator |  "osd_lvm_uuid": 
"5cfbeba1-5550-585b-8a7e-42a4921f8eca" 2026-03-07 00:42:24.954112 | orchestrator |  } 2026-03-07 00:42:24.954123 | orchestrator |  } 2026-03-07 00:42:24.954134 | orchestrator | } 2026-03-07 00:42:24.954145 | orchestrator | 2026-03-07 00:42:24.954164 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-03-07 00:42:24.954175 | orchestrator | Saturday 07 March 2026 00:42:22 +0000 (0:00:00.169) 0:00:45.207 ******** 2026-03-07 00:42:24.954186 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:42:24.954197 | orchestrator | 2026-03-07 00:42:24.954208 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-03-07 00:42:24.954218 | orchestrator | Saturday 07 March 2026 00:42:22 +0000 (0:00:00.392) 0:00:45.599 ******** 2026-03-07 00:42:24.954229 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:42:24.954240 | orchestrator | 2026-03-07 00:42:24.954250 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-03-07 00:42:24.954261 | orchestrator | Saturday 07 March 2026 00:42:23 +0000 (0:00:00.175) 0:00:45.775 ******** 2026-03-07 00:42:24.954272 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:42:24.954282 | orchestrator | 2026-03-07 00:42:24.954293 | orchestrator | TASK [Print configuration data] ************************************************ 2026-03-07 00:42:24.954304 | orchestrator | Saturday 07 March 2026 00:42:23 +0000 (0:00:00.156) 0:00:45.931 ******** 2026-03-07 00:42:24.954315 | orchestrator | changed: [testbed-node-5] => { 2026-03-07 00:42:24.954326 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-03-07 00:42:24.954337 | orchestrator |  "ceph_osd_devices": { 2026-03-07 00:42:24.954347 | orchestrator |  "sdb": { 2026-03-07 00:42:24.954358 | orchestrator |  "osd_lvm_uuid": "f3e458ba-b75f-5cb4-a1c9-e61fe3486295" 2026-03-07 00:42:24.954369 | orchestrator |  }, 2026-03-07 00:42:24.954380 | 
orchestrator |  "sdc": {
2026-03-07 00:42:24.954391 | orchestrator |  "osd_lvm_uuid": "5cfbeba1-5550-585b-8a7e-42a4921f8eca"
2026-03-07 00:42:24.954402 | orchestrator |  }
2026-03-07 00:42:24.954413 | orchestrator |  },
2026-03-07 00:42:24.954424 | orchestrator |  "lvm_volumes": [
2026-03-07 00:42:24.954435 | orchestrator |  {
2026-03-07 00:42:24.954446 | orchestrator |  "data": "osd-block-f3e458ba-b75f-5cb4-a1c9-e61fe3486295",
2026-03-07 00:42:24.954456 | orchestrator |  "data_vg": "ceph-f3e458ba-b75f-5cb4-a1c9-e61fe3486295"
2026-03-07 00:42:24.954467 | orchestrator |  },
2026-03-07 00:42:24.954483 | orchestrator |  {
2026-03-07 00:42:24.954494 | orchestrator |  "data": "osd-block-5cfbeba1-5550-585b-8a7e-42a4921f8eca",
2026-03-07 00:42:24.954504 | orchestrator |  "data_vg": "ceph-5cfbeba1-5550-585b-8a7e-42a4921f8eca"
2026-03-07 00:42:24.954515 | orchestrator |  }
2026-03-07 00:42:24.954526 | orchestrator |  ]
2026-03-07 00:42:24.954537 | orchestrator |  }
2026-03-07 00:42:24.954548 | orchestrator | }
2026-03-07 00:42:24.954559 | orchestrator |
2026-03-07 00:42:24.954570 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-03-07 00:42:24.954581 | orchestrator | Saturday 07 March 2026 00:42:23 +0000 (0:00:00.240) 0:00:46.172 ********
2026-03-07 00:42:24.954592 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-03-07 00:42:24.954602 | orchestrator |
2026-03-07 00:42:24.954613 | orchestrator | PLAY RECAP *********************************************************************
2026-03-07 00:42:24.954624 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-03-07 00:42:24.954637 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-03-07 00:42:24.954648 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-03-07 00:42:24.954658 | orchestrator |
2026-03-07 00:42:24.954669 | orchestrator |
2026-03-07 00:42:24.954680 | orchestrator |
2026-03-07 00:42:24.954691 | orchestrator | TASKS RECAP ********************************************************************
2026-03-07 00:42:24.954702 | orchestrator | Saturday 07 March 2026 00:42:24 +0000 (0:00:01.429) 0:00:47.602 ********
2026-03-07 00:42:24.954719 | orchestrator | ===============================================================================
2026-03-07 00:42:24.954730 | orchestrator | Write configuration file ------------------------------------------------ 4.47s
2026-03-07 00:42:24.954741 | orchestrator | Add known links to the list of available block devices ------------------ 1.59s
2026-03-07 00:42:24.954759 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 1.29s
2026-03-07 00:42:24.954770 | orchestrator | Add known partitions to the list of available block devices ------------- 1.26s
2026-03-07 00:42:24.954813 | orchestrator | Add known partitions to the list of available block devices ------------- 1.10s
2026-03-07 00:42:24.954830 | orchestrator | Add known partitions to the list of available block devices ------------- 1.09s
2026-03-07 00:42:24.954849 | orchestrator | Get initial list of available block devices ----------------------------- 1.08s
2026-03-07 00:42:24.954865 | orchestrator | Add known links to the list of available block devices ------------------ 1.05s
2026-03-07 00:42:24.954884 | orchestrator | Add known partitions to the list of available block devices ------------- 0.95s
2026-03-07 00:42:24.954901 | orchestrator | Print configuration data ------------------------------------------------ 0.91s
2026-03-07 00:42:24.954917 | orchestrator | Add known links to the list of available block devices ------------------ 0.88s
2026-03-07 00:42:24.954928 | orchestrator | Add known partitions to the list of available block devices ------------- 0.75s
2026-03-07 00:42:24.954939 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.75s
2026-03-07 00:42:24.954959 | orchestrator | Add known links to the list of available block devices ------------------ 0.74s
2026-03-07 00:42:25.444712 | orchestrator | Add known partitions to the list of available block devices ------------- 0.70s
2026-03-07 00:42:25.444852 | orchestrator | Add known partitions to the list of available block devices ------------- 0.69s
2026-03-07 00:42:25.444866 | orchestrator | Add known links to the list of available block devices ------------------ 0.69s
2026-03-07 00:42:25.444874 | orchestrator | Print WAL devices ------------------------------------------------------- 0.68s
2026-03-07 00:42:25.444882 | orchestrator | Add known links to the list of available block devices ------------------ 0.68s
2026-03-07 00:42:25.444890 | orchestrator | Generate shared DB/WAL VG names ----------------------------------------- 0.64s
2026-03-07 00:42:48.365472 | orchestrator | 2026-03-07 00:42:48 | INFO  | Task 80934921-921f-410f-ab0d-2bb8d4911362 (sync inventory) is running in background. Output coming soon.
2026-03-07 00:43:20.864774 | orchestrator | 2026-03-07 00:42:50 | INFO  | Starting group_vars file reorganization
2026-03-07 00:43:20.864941 | orchestrator | 2026-03-07 00:42:50 | INFO  | Moved 0 file(s) to their respective directories
2026-03-07 00:43:20.864959 | orchestrator | 2026-03-07 00:42:50 | INFO  | Group_vars file reorganization completed
2026-03-07 00:43:20.864971 | orchestrator | 2026-03-07 00:42:53 | INFO  | Starting variable preparation from inventory
2026-03-07 00:43:20.864983 | orchestrator | 2026-03-07 00:42:57 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2026-03-07 00:43:20.864994 | orchestrator | 2026-03-07 00:42:57 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2026-03-07 00:43:20.865021 | orchestrator | 2026-03-07 00:42:57 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2026-03-07 00:43:20.865033 | orchestrator | 2026-03-07 00:42:57 | INFO  | 3 file(s) written, 6 host(s) processed
2026-03-07 00:43:20.865045 | orchestrator | 2026-03-07 00:42:57 | INFO  | Variable preparation completed
2026-03-07 00:43:20.865056 | orchestrator | 2026-03-07 00:42:59 | INFO  | Starting inventory overwrite handling
2026-03-07 00:43:20.865067 | orchestrator | 2026-03-07 00:42:59 | INFO  | Handling group overwrites in 99-overwrite
2026-03-07 00:43:20.865077 | orchestrator | 2026-03-07 00:42:59 | INFO  | Removing group frr:children from 60-generic
2026-03-07 00:43:20.865109 | orchestrator | 2026-03-07 00:42:59 | INFO  | Removing group netbird:children from 50-infrastructure
2026-03-07 00:43:20.865120 | orchestrator | 2026-03-07 00:42:59 | INFO  | Removing group ceph-mds from 50-ceph
2026-03-07 00:43:20.865132 | orchestrator | 2026-03-07 00:42:59 | INFO  | Removing group ceph-rgw from 50-ceph
2026-03-07 00:43:20.865161 | orchestrator | 2026-03-07 00:42:59 | INFO  | Handling group overwrites in 20-roles
2026-03-07 00:43:20.865262 | orchestrator | 2026-03-07 00:42:59 | INFO  | Removing group k3s_node from 50-infrastructure
2026-03-07 00:43:20.865291 | orchestrator | 2026-03-07 00:42:59 | INFO  | Removed 5 group(s) in total
2026-03-07 00:43:20.865309 | orchestrator | 2026-03-07 00:42:59 | INFO  | Inventory overwrite handling completed
2026-03-07 00:43:20.865329 | orchestrator | 2026-03-07 00:43:00 | INFO  | Starting merge of inventory files
2026-03-07 00:43:20.865349 | orchestrator | 2026-03-07 00:43:00 | INFO  | Inventory files merged successfully
2026-03-07 00:43:20.865368 | orchestrator | 2026-03-07 00:43:06 | INFO  | Generating ClusterShell configuration from Ansible inventory
2026-03-07 00:43:20.865386 | orchestrator | 2026-03-07 00:43:19 | INFO  | Successfully wrote ClusterShell configuration
2026-03-07 00:43:20.865399 | orchestrator | [master 9533bf1] 2026-03-07-00-43
2026-03-07 00:43:20.865413 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-)
2026-03-07 00:43:23.001618 | orchestrator | 2026-03-07 00:43:23 | INFO  | Prepare task for execution of ceph-create-lvm-devices.
2026-03-07 00:43:23.057411 | orchestrator | 2026-03-07 00:43:23 | INFO  | Task b5a61278-80bc-4f93-bbc0-0d0b51b06de1 (ceph-create-lvm-devices) was prepared for execution.
2026-03-07 00:43:23.057512 | orchestrator | 2026-03-07 00:43:23 | INFO  | It takes a moment until task b5a61278-80bc-4f93-bbc0-0d0b51b06de1 (ceph-create-lvm-devices) has been started and output is visible here.
2026-03-07 00:43:37.159167 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-03-07 00:43:37.159267 | orchestrator | 2.16.14
2026-03-07 00:43:37.159281 | orchestrator |
2026-03-07 00:43:37.159384 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-03-07 00:43:37.159396 | orchestrator |
2026-03-07 00:43:37.159404 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-03-07 00:43:37.159413 | orchestrator | Saturday 07 March 2026 00:43:28 +0000 (0:00:00.356) 0:00:00.356 ********
2026-03-07 00:43:37.159422 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-03-07 00:43:37.159430 | orchestrator |
2026-03-07 00:43:37.159439 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-03-07 00:43:37.159447 | orchestrator | Saturday 07 March 2026 00:43:29 +0000 (0:00:00.254) 0:00:00.611 ********
2026-03-07 00:43:37.159455 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:43:37.159463 | orchestrator |
2026-03-07 00:43:37.159471 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-07 00:43:37.159478 | orchestrator | Saturday 07 March 2026 00:43:29 +0000 (0:00:00.274) 0:00:00.885 ********
2026-03-07 00:43:37.159486 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-03-07 00:43:37.159494 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-03-07 00:43:37.159502 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-03-07 00:43:37.159510 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-03-07 00:43:37.159517 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-03-07 00:43:37.159525 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-03-07 00:43:37.159532 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-03-07 00:43:37.159565 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-03-07 00:43:37.159574 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-03-07 00:43:37.159581 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-03-07 00:43:37.159589 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-03-07 00:43:37.159597 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-03-07 00:43:37.159605 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-03-07 00:43:37.159612 | orchestrator |
2026-03-07 00:43:37.159620 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-07 00:43:37.159628 | orchestrator | Saturday 07 March 2026 00:43:29 +0000 (0:00:00.577) 0:00:01.463 ********
2026-03-07 00:43:37.159636 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:43:37.159644 | orchestrator |
2026-03-07 00:43:37.159652 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-07 00:43:37.159659 | orchestrator | Saturday 07 March 2026 00:43:30 +0000 (0:00:00.211) 0:00:01.675 ********
2026-03-07 00:43:37.159667 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:43:37.159675 | orchestrator |
2026-03-07 00:43:37.159683 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-07 00:43:37.159690 | orchestrator | Saturday 07 March 2026 00:43:30 +0000 (0:00:00.216) 0:00:01.891 ********
2026-03-07 00:43:37.159698 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:43:37.159706 | orchestrator |
2026-03-07 00:43:37.159715 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-07 00:43:37.159724 | orchestrator | Saturday 07 March 2026 00:43:30 +0000 (0:00:00.236) 0:00:02.128 ********
2026-03-07 00:43:37.159733 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:43:37.159741 | orchestrator |
2026-03-07 00:43:37.159750 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-07 00:43:37.159759 | orchestrator | Saturday 07 March 2026 00:43:30 +0000 (0:00:00.219) 0:00:02.348 ********
2026-03-07 00:43:37.159767 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:43:37.159776 | orchestrator |
2026-03-07 00:43:37.159785 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-07 00:43:37.159811 | orchestrator | Saturday 07 March 2026 00:43:31 +0000 (0:00:00.217) 0:00:02.565 ********
2026-03-07 00:43:37.159821 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:43:37.159830 | orchestrator |
2026-03-07 00:43:37.159840 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-07 00:43:37.159848 | orchestrator | Saturday 07 March 2026 00:43:31 +0000 (0:00:00.219) 0:00:02.784 ********
2026-03-07 00:43:37.159857 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:43:37.159866 | orchestrator |
2026-03-07 00:43:37.159875 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-07 00:43:37.159884 | orchestrator | Saturday 07 March 2026 00:43:31 +0000 (0:00:00.230) 0:00:03.015 ********
2026-03-07 00:43:37.159893 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:43:37.159903 | orchestrator |
2026-03-07 00:43:37.159912 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-07 00:43:37.159921 | orchestrator | Saturday 07 March 2026 00:43:31 +0000 (0:00:00.235) 0:00:03.250 ********
2026-03-07 00:43:37.159930 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_448434c8-f92f-4dad-84e2-85ad64f4e35e)
2026-03-07 00:43:37.159940 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_448434c8-f92f-4dad-84e2-85ad64f4e35e)
2026-03-07 00:43:37.159950 | orchestrator |
2026-03-07 00:43:37.159959 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-07 00:43:37.159982 | orchestrator | Saturday 07 March 2026 00:43:32 +0000 (0:00:00.403) 0:00:03.654 ********
2026-03-07 00:43:37.159998 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_6b3da8fe-8a9b-450a-9caf-2db14f74686e)
2026-03-07 00:43:37.160008 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_6b3da8fe-8a9b-450a-9caf-2db14f74686e)
2026-03-07 00:43:37.160016 | orchestrator |
2026-03-07 00:43:37.160025 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-07 00:43:37.160034 | orchestrator | Saturday 07 March 2026 00:43:32 +0000 (0:00:00.706) 0:00:04.361 ********
2026-03-07 00:43:37.160043 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_72259f68-e866-4719-b0ea-eb473e4fd6bd)
2026-03-07 00:43:37.160052 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_72259f68-e866-4719-b0ea-eb473e4fd6bd)
2026-03-07 00:43:37.160061 | orchestrator |
2026-03-07 00:43:37.160069 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-07 00:43:37.160077 | orchestrator | Saturday 07 March 2026 00:43:33 +0000 (0:00:00.708) 0:00:05.069 ********
2026-03-07 00:43:37.160085 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_cc667673-5185-49c1-bb99-04f4fd4068da)
2026-03-07 00:43:37.160093 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_cc667673-5185-49c1-bb99-04f4fd4068da)
2026-03-07 00:43:37.160101 | orchestrator |
2026-03-07 00:43:37.160108 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-07 00:43:37.160116 | orchestrator | Saturday 07 March 2026 00:43:34 +0000 (0:00:00.940) 0:00:06.009 ********
2026-03-07 00:43:37.160124 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-03-07 00:43:37.160132 | orchestrator |
2026-03-07 00:43:37.160140 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-07 00:43:37.160147 | orchestrator | Saturday 07 March 2026 00:43:34 +0000 (0:00:00.426) 0:00:06.436 ********
2026-03-07 00:43:37.160155 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2026-03-07 00:43:37.160163 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2026-03-07 00:43:37.160170 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2026-03-07 00:43:37.160178 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2026-03-07 00:43:37.160186 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2026-03-07 00:43:37.160198 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2026-03-07 00:43:37.160206 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2026-03-07 00:43:37.160214 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2026-03-07 00:43:37.160222 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2026-03-07 00:43:37.160229 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2026-03-07 00:43:37.160237 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2026-03-07 00:43:37.160245 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2026-03-07 00:43:37.160252 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2026-03-07 00:43:37.160260 | orchestrator |
2026-03-07 00:43:37.160268 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-07 00:43:37.160275 | orchestrator | Saturday 07 March 2026 00:43:35 +0000 (0:00:00.499) 0:00:06.935 ********
2026-03-07 00:43:37.160283 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:43:37.160307 | orchestrator |
2026-03-07 00:43:37.160316 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-07 00:43:37.160323 | orchestrator | Saturday 07 March 2026 00:43:35 +0000 (0:00:00.236) 0:00:07.172 ********
2026-03-07 00:43:37.160337 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:43:37.160345 | orchestrator |
2026-03-07 00:43:37.160353 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-07 00:43:37.160361 | orchestrator | Saturday 07 March 2026 00:43:35 +0000 (0:00:00.241) 0:00:07.414 ********
2026-03-07 00:43:37.160368 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:43:37.160376 | orchestrator |
2026-03-07 00:43:37.160384 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-07 00:43:37.160392 | orchestrator | Saturday 07 March 2026 00:43:36 +0000 (0:00:00.244) 0:00:07.658 ********
2026-03-07 00:43:37.160400 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:43:37.160408 | orchestrator |
2026-03-07 00:43:37.160415 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-07 00:43:37.160423 | orchestrator | Saturday 07 March 2026 00:43:36 +0000 (0:00:00.249) 0:00:07.908 ********
2026-03-07 00:43:37.160431 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:43:37.160439 | orchestrator |
2026-03-07 00:43:37.160447 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-07 00:43:37.160455 | orchestrator | Saturday 07 March 2026 00:43:36 +0000 (0:00:00.227) 0:00:08.136 ********
2026-03-07 00:43:37.160462 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:43:37.160470 | orchestrator |
2026-03-07 00:43:37.160478 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-07 00:43:37.160486 | orchestrator | Saturday 07 March 2026 00:43:36 +0000 (0:00:00.231) 0:00:08.367 ********
2026-03-07 00:43:37.160494 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:43:37.160502 | orchestrator |
2026-03-07 00:43:37.160514 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-07 00:43:45.800909 | orchestrator | Saturday 07 March 2026 00:43:37 +0000 (0:00:00.287) 0:00:08.655 ********
2026-03-07 00:43:45.801029 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:43:45.801045 | orchestrator |
2026-03-07 00:43:45.801057 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-07 00:43:45.801067 | orchestrator | Saturday 07 March 2026 00:43:37 +0000 (0:00:00.253) 0:00:08.908 ********
2026-03-07 00:43:45.801076 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2026-03-07 00:43:45.801086 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2026-03-07 00:43:45.801095 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2026-03-07 00:43:45.801105 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2026-03-07 00:43:45.801163 | orchestrator |
2026-03-07 00:43:45.801174 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-07 00:43:45.801183 | orchestrator | Saturday 07 March 2026 00:43:38 +0000 (0:00:01.210) 0:00:10.118 ********
2026-03-07 00:43:45.801192 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:43:45.801202 | orchestrator |
2026-03-07 00:43:45.801211 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-07 00:43:45.801220 | orchestrator | Saturday 07 March 2026 00:43:38 +0000 (0:00:00.231) 0:00:10.350 ********
2026-03-07 00:43:45.801235 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:43:45.801249 | orchestrator |
2026-03-07 00:43:45.801265 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-07 00:43:45.801279 | orchestrator | Saturday 07 March 2026 00:43:39 +0000 (0:00:00.214) 0:00:10.565 ********
2026-03-07 00:43:45.801293 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:43:45.801307 | orchestrator |
2026-03-07 00:43:45.801320 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-07 00:43:45.801334 | orchestrator | Saturday 07 March 2026 00:43:39 +0000 (0:00:00.216) 0:00:10.782 ********
2026-03-07 00:43:45.801406 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:43:45.801425 | orchestrator |
2026-03-07 00:43:45.801440 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-03-07 00:43:45.801455 | orchestrator | Saturday 07 March 2026 00:43:39 +0000 (0:00:00.214) 0:00:10.997 ********
2026-03-07 00:43:45.801471 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:43:45.801515 | orchestrator |
2026-03-07 00:43:45.801531 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-03-07 00:43:45.801547 | orchestrator | Saturday 07 March 2026 00:43:39 +0000 (0:00:00.150) 0:00:11.147 ********
2026-03-07 00:43:45.801565 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'e9f941f3-03bb-56ef-8ac7-c30bc8004c51'}})
2026-03-07 00:43:45.801582 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '6cee2ec4-9e84-549b-8075-e81043ce518c'}})
2026-03-07 00:43:45.801597 | orchestrator |
2026-03-07 00:43:45.801612 | orchestrator | TASK [Create block VGs] ********************************************************
2026-03-07 00:43:45.801627 | orchestrator | Saturday 07 March 2026 00:43:39 +0000 (0:00:00.197) 0:00:11.344 ********
2026-03-07 00:43:45.801643 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-e9f941f3-03bb-56ef-8ac7-c30bc8004c51', 'data_vg': 'ceph-e9f941f3-03bb-56ef-8ac7-c30bc8004c51'})
2026-03-07 00:43:45.801661 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-6cee2ec4-9e84-549b-8075-e81043ce518c', 'data_vg': 'ceph-6cee2ec4-9e84-549b-8075-e81043ce518c'})
2026-03-07 00:43:45.801677 | orchestrator |
2026-03-07 00:43:45.801695 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-03-07 00:43:45.801711 | orchestrator | Saturday 07 March 2026 00:43:41 +0000 (0:00:02.043) 0:00:13.388 ********
2026-03-07 00:43:45.801727 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e9f941f3-03bb-56ef-8ac7-c30bc8004c51', 'data_vg': 'ceph-e9f941f3-03bb-56ef-8ac7-c30bc8004c51'})
2026-03-07 00:43:45.801743 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6cee2ec4-9e84-549b-8075-e81043ce518c', 'data_vg': 'ceph-6cee2ec4-9e84-549b-8075-e81043ce518c'})
2026-03-07 00:43:45.801759 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:43:45.801773 | orchestrator |
2026-03-07 00:43:45.801789 | orchestrator | TASK [Create block LVs] ********************************************************
2026-03-07 00:43:45.801805 | orchestrator | Saturday 07 March 2026 00:43:42 +0000 (0:00:00.167) 0:00:13.555 ********
2026-03-07 00:43:45.801820 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-e9f941f3-03bb-56ef-8ac7-c30bc8004c51', 'data_vg': 'ceph-e9f941f3-03bb-56ef-8ac7-c30bc8004c51'})
2026-03-07 00:43:45.801835 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-6cee2ec4-9e84-549b-8075-e81043ce518c', 'data_vg': 'ceph-6cee2ec4-9e84-549b-8075-e81043ce518c'})
2026-03-07 00:43:45.801850 | orchestrator |
2026-03-07 00:43:45.801883 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-03-07 00:43:45.801898 | orchestrator | Saturday 07 March 2026 00:43:43 +0000 (0:00:01.484) 0:00:15.039 ********
2026-03-07 00:43:45.801913 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e9f941f3-03bb-56ef-8ac7-c30bc8004c51', 'data_vg': 'ceph-e9f941f3-03bb-56ef-8ac7-c30bc8004c51'})
2026-03-07 00:43:45.801928 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6cee2ec4-9e84-549b-8075-e81043ce518c', 'data_vg': 'ceph-6cee2ec4-9e84-549b-8075-e81043ce518c'})
2026-03-07 00:43:45.801943 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:43:45.801958 | orchestrator |
2026-03-07 00:43:45.801972 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-03-07 00:43:45.801987 | orchestrator | Saturday 07 March 2026 00:43:43 +0000 (0:00:00.171) 0:00:15.216 ********
2026-03-07 00:43:45.802089 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:43:45.802110 | orchestrator |
2026-03-07 00:43:45.802125 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-03-07 00:43:45.802139 | orchestrator | Saturday 07 March 2026 00:43:43 +0000 (0:00:00.171) 0:00:15.387 ********
2026-03-07 00:43:45.802154 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e9f941f3-03bb-56ef-8ac7-c30bc8004c51', 'data_vg': 'ceph-e9f941f3-03bb-56ef-8ac7-c30bc8004c51'})
2026-03-07 00:43:45.802168 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6cee2ec4-9e84-549b-8075-e81043ce518c', 'data_vg': 'ceph-6cee2ec4-9e84-549b-8075-e81043ce518c'})
2026-03-07 00:43:45.802195 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:43:45.802219 | orchestrator |
2026-03-07 00:43:45.802234 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-03-07 00:43:45.802248 | orchestrator | Saturday 07 March 2026 00:43:44 +0000 (0:00:00.396) 0:00:15.784 ********
2026-03-07 00:43:45.802263 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:43:45.802276 | orchestrator |
2026-03-07 00:43:45.802291 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-03-07 00:43:45.802305 | orchestrator | Saturday 07 March 2026 00:43:44 +0000 (0:00:00.161) 0:00:15.945 ********
2026-03-07 00:43:45.802321 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e9f941f3-03bb-56ef-8ac7-c30bc8004c51', 'data_vg': 'ceph-e9f941f3-03bb-56ef-8ac7-c30bc8004c51'})
2026-03-07 00:43:45.802336 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6cee2ec4-9e84-549b-8075-e81043ce518c', 'data_vg': 'ceph-6cee2ec4-9e84-549b-8075-e81043ce518c'})
2026-03-07 00:43:45.802370 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:43:45.802386 | orchestrator |
2026-03-07 00:43:45.802402 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-03-07 00:43:45.802417 | orchestrator | Saturday 07 March 2026 00:43:44 +0000 (0:00:00.192) 0:00:16.137 ********
2026-03-07 00:43:45.802433 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:43:45.802448 | orchestrator |
2026-03-07 00:43:45.802464 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-03-07 00:43:45.802479 | orchestrator | Saturday 07 March 2026 00:43:44 +0000 (0:00:00.146) 0:00:16.283 ********
2026-03-07 00:43:45.802495 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e9f941f3-03bb-56ef-8ac7-c30bc8004c51', 'data_vg': 'ceph-e9f941f3-03bb-56ef-8ac7-c30bc8004c51'})
2026-03-07 00:43:45.802519 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6cee2ec4-9e84-549b-8075-e81043ce518c', 'data_vg': 'ceph-6cee2ec4-9e84-549b-8075-e81043ce518c'})
2026-03-07 00:43:45.802536 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:43:45.802552 | orchestrator |
2026-03-07 00:43:45.802567 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-03-07 00:43:45.802583 | orchestrator | Saturday 07 March 2026 00:43:44 +0000 (0:00:00.186) 0:00:16.470 ********
2026-03-07 00:43:45.802598 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:43:45.802615 | orchestrator |
2026-03-07 00:43:45.802631 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-03-07 00:43:45.802647 | orchestrator | Saturday 07 March 2026 00:43:45 +0000 (0:00:00.161) 0:00:16.632 ********
2026-03-07 00:43:45.802662 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e9f941f3-03bb-56ef-8ac7-c30bc8004c51', 'data_vg': 'ceph-e9f941f3-03bb-56ef-8ac7-c30bc8004c51'})
2026-03-07 00:43:45.802678 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6cee2ec4-9e84-549b-8075-e81043ce518c', 'data_vg': 'ceph-6cee2ec4-9e84-549b-8075-e81043ce518c'})
2026-03-07 00:43:45.802694 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:43:45.802709 | orchestrator |
2026-03-07 00:43:45.802725 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-03-07 00:43:45.802740 | orchestrator | Saturday 07 March 2026 00:43:45 +0000 (0:00:00.165) 0:00:16.797 ********
2026-03-07 00:43:45.802755 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e9f941f3-03bb-56ef-8ac7-c30bc8004c51', 'data_vg': 'ceph-e9f941f3-03bb-56ef-8ac7-c30bc8004c51'})
2026-03-07 00:43:45.802771 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6cee2ec4-9e84-549b-8075-e81043ce518c', 'data_vg': 'ceph-6cee2ec4-9e84-549b-8075-e81043ce518c'})
2026-03-07 00:43:45.802787 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:43:45.802802 | orchestrator |
2026-03-07 00:43:45.802818 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-03-07 00:43:45.802845 | orchestrator | Saturday 07 March 2026 00:43:45 +0000 (0:00:00.155) 0:00:16.953 ********
2026-03-07 00:43:45.802860 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e9f941f3-03bb-56ef-8ac7-c30bc8004c51', 'data_vg': 'ceph-e9f941f3-03bb-56ef-8ac7-c30bc8004c51'})
2026-03-07 00:43:45.802876 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6cee2ec4-9e84-549b-8075-e81043ce518c', 'data_vg': 'ceph-6cee2ec4-9e84-549b-8075-e81043ce518c'})
2026-03-07 00:43:45.802892 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:43:45.802906 | orchestrator |
2026-03-07 00:43:45.802922 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-03-07 00:43:45.802939 | orchestrator | Saturday 07 March 2026 00:43:45 +0000 (0:00:00.176) 0:00:17.129 ********
2026-03-07 00:43:45.802954 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:43:45.802969 | orchestrator |
2026-03-07 00:43:45.802985 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-03-07 00:43:45.803015 | orchestrator | Saturday 07 March 2026 00:43:45 +0000 (0:00:00.168) 0:00:17.298 ********
2026-03-07 00:43:52.461963 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:43:52.462146 | orchestrator |
2026-03-07 00:43:52.462166 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-03-07 00:43:52.462180 | orchestrator | Saturday 07 March 2026 00:43:45 +0000 (0:00:00.154) 0:00:17.453 ********
2026-03-07 00:43:52.462191 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:43:52.462202 | orchestrator |
2026-03-07 00:43:52.462214 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-03-07 00:43:52.462226 | orchestrator | Saturday 07 March 2026 00:43:46 +0000 (0:00:00.140) 0:00:17.593 ********
2026-03-07 00:43:52.462237 | orchestrator | ok: [testbed-node-3] => {
2026-03-07 00:43:52.462249 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2026-03-07 00:43:52.462260 | orchestrator | }
2026-03-07 00:43:52.462271 | orchestrator |
2026-03-07 00:43:52.462282 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-03-07 00:43:52.462293 | orchestrator | Saturday 07 March 2026 00:43:46 +0000 (0:00:00.362) 0:00:17.956 ********
2026-03-07 00:43:52.462304 | orchestrator | ok: [testbed-node-3] => {
2026-03-07 00:43:52.462315 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2026-03-07 00:43:52.462326 | orchestrator | }
2026-03-07 00:43:52.462337 | orchestrator |
2026-03-07 00:43:52.462348 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-03-07 00:43:52.462359 | orchestrator | Saturday 07 March 2026 00:43:46 +0000 (0:00:00.138) 0:00:18.094 ********
2026-03-07 00:43:52.462370 | orchestrator | ok: [testbed-node-3] => {
2026-03-07 00:43:52.462381 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2026-03-07 00:43:52.462392 | orchestrator | }
2026-03-07 00:43:52.462461 | orchestrator |
2026-03-07 00:43:52.462472 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-03-07 00:43:52.462483 | orchestrator | Saturday 07 March 2026 00:43:46 +0000 (0:00:00.146) 0:00:18.240 ********
2026-03-07 00:43:52.462494 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:43:52.462504 | orchestrator |
2026-03-07 00:43:52.462515 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-03-07 00:43:52.462526 | orchestrator | Saturday 07 March 2026 00:43:47 +0000 (0:00:00.686) 0:00:18.926 ********
2026-03-07 00:43:52.462537 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:43:52.462548 | orchestrator |
2026-03-07 00:43:52.462558 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-03-07 00:43:52.462569 | orchestrator | Saturday 07 March 2026 00:43:48 +0000 (0:00:00.606) 0:00:19.533 ********
2026-03-07 00:43:52.462580 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:43:52.462591 | orchestrator |
2026-03-07 00:43:52.462602 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-03-07 00:43:52.462612 | orchestrator | Saturday 07 March 2026 00:43:48 +0000 (0:00:00.531) 0:00:20.065 ********
2026-03-07 00:43:52.462623 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:43:52.462634 | orchestrator |
2026-03-07 00:43:52.462673 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-03-07 00:43:52.462685 | orchestrator | Saturday 07 March 2026 00:43:48 +0000 (0:00:00.169) 0:00:20.234 ********
2026-03-07 00:43:52.462696 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:43:52.462706 | orchestrator |
2026-03-07 00:43:52.462717 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-03-07 00:43:52.462728 | orchestrator | Saturday 07 March 2026 00:43:48 +0000 (0:00:00.122) 0:00:20.356 ********
2026-03-07 00:43:52.462739 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:43:52.462749 | orchestrator |
2026-03-07 00:43:52.462760 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-03-07 00:43:52.462771 | orchestrator | Saturday 07 March 2026 00:43:48 +0000 (0:00:00.108) 0:00:20.465 ********
2026-03-07 00:43:52.462781 | orchestrator | ok: [testbed-node-3] => {
2026-03-07 00:43:52.462792 | orchestrator |     "vgs_report": {
2026-03-07 00:43:52.462803 | orchestrator |         "vg": []
2026-03-07 00:43:52.462814 | orchestrator |     }
2026-03-07 00:43:52.462825 | orchestrator | }
2026-03-07 00:43:52.462836 | orchestrator |
2026-03-07 00:43:52.462846 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-03-07 00:43:52.462857 | orchestrator | Saturday 07 March 2026 00:43:49 +0000 (0:00:00.159) 0:00:20.625 ********
2026-03-07 00:43:52.462868 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:43:52.462878 | orchestrator |
2026-03-07 00:43:52.462889 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-03-07 00:43:52.462900 | orchestrator | Saturday 07 March 2026 00:43:49 +0000 (0:00:00.135) 0:00:20.760 ********
2026-03-07 00:43:52.462910 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:43:52.462921 | orchestrator |
2026-03-07 00:43:52.462932 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-03-07 00:43:52.462943 | orchestrator | Saturday 07 March 2026 00:43:49 +0000 (0:00:00.150) 0:00:20.911 ********
2026-03-07 00:43:52.462953 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:43:52.462964 | orchestrator |
2026-03-07 00:43:52.462975 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-03-07 00:43:52.462986 | orchestrator | Saturday 07 March 2026 00:43:49 +0000 (0:00:00.348) 0:00:21.260 ********
2026-03-07 00:43:52.462997 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:43:52.463007 | orchestrator |
2026-03-07 00:43:52.463018 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-03-07 00:43:52.463029 | orchestrator | Saturday 07 March 2026 00:43:49 +0000 (0:00:00.146) 0:00:21.406 ********
2026-03-07 00:43:52.463039 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:43:52.463050 | orchestrator |
2026-03-07 00:43:52.463060 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-03-07 00:43:52.463071 | orchestrator | Saturday 07 March 2026 00:43:50 +0000 (0:00:00.137) 0:00:21.544 ********
2026-03-07 00:43:52.463082 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:43:52.463092 | orchestrator |
2026-03-07 00:43:52.463103 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-03-07 00:43:52.463114 | orchestrator | Saturday 07 March 2026 00:43:50 +0000 (0:00:00.133) 0:00:21.677 ********
2026-03-07 00:43:52.463125 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:43:52.463135 | orchestrator |
2026-03-07 00:43:52.463146 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-03-07 00:43:52.463157 | orchestrator | Saturday 07 March 2026 00:43:50 +0000 (0:00:00.178) 0:00:21.856 ********
2026-03-07 00:43:52.463188 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:43:52.463199 | orchestrator |
2026-03-07 00:43:52.463210 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-03-07 00:43:52.463221 | orchestrator | Saturday 07 March 2026 00:43:50 +0000 (0:00:00.152) 0:00:22.008 ********
2026-03-07 00:43:52.463232 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:43:52.463242 | orchestrator |
2026-03-07 00:43:52.463253 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-03-07 00:43:52.463272 | orchestrator | Saturday 07 March 2026 00:43:50 +0000 (0:00:00.138) 0:00:22.146 ********
2026-03-07 00:43:52.463283 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:43:52.463294 | orchestrator |
2026-03-07 00:43:52.463305 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-03-07 00:43:52.463315 | orchestrator | Saturday 07 March 2026 00:43:50 +0000 (0:00:00.150) 0:00:22.297 ********
2026-03-07 00:43:52.463326 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:43:52.463337 | orchestrator |
2026-03-07 00:43:52.463366 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-03-07 00:43:52.463377 | orchestrator | Saturday 07 March 2026 00:43:50 +0000 (0:00:00.134) 0:00:22.432 ********
2026-03-07 00:43:52.463388 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:43:52.463423 | orchestrator |
2026-03-07 00:43:52.463442 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-03-07 00:43:52.463462 | orchestrator | Saturday 07 March 2026 00:43:51 +0000 (0:00:00.143) 0:00:22.575 ********
2026-03-07 00:43:52.463482 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:43:52.463502 | orchestrator |
2026-03-07 00:43:52.463521 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-03-07 00:43:52.463540 | orchestrator | Saturday 07 March 2026 00:43:51 +0000 (0:00:00.142) 0:00:22.718 ********
2026-03-07 00:43:52.463567 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:43:52.463589 | orchestrator |
2026-03-07 00:43:52.463607 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-03-07 00:43:52.463625 | orchestrator | Saturday 07 March 2026 00:43:51 +0000 (0:00:00.140) 0:00:22.858 ********
2026-03-07 00:43:52.463645 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e9f941f3-03bb-56ef-8ac7-c30bc8004c51', 'data_vg': 'ceph-e9f941f3-03bb-56ef-8ac7-c30bc8004c51'})
2026-03-07 00:43:52.463666 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6cee2ec4-9e84-549b-8075-e81043ce518c', 'data_vg': 'ceph-6cee2ec4-9e84-549b-8075-e81043ce518c'})
2026-03-07 00:43:52.463684 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:43:52.463703 | orchestrator |
2026-03-07 00:43:52.463721 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-03-07 00:43:52.463749 | orchestrator | Saturday 07 March 2026 00:43:51 +0000 (0:00:00.372) 0:00:23.231 ********
2026-03-07 00:43:52.463761 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e9f941f3-03bb-56ef-8ac7-c30bc8004c51', 'data_vg': 'ceph-e9f941f3-03bb-56ef-8ac7-c30bc8004c51'})
2026-03-07 00:43:52.463772 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6cee2ec4-9e84-549b-8075-e81043ce518c', 'data_vg': 'ceph-6cee2ec4-9e84-549b-8075-e81043ce518c'})
2026-03-07 00:43:52.463783 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:43:52.463796 | orchestrator |
2026-03-07 00:43:52.463815 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-03-07 00:43:52.463832 | orchestrator | Saturday 07 March 2026 00:43:51 +0000 (0:00:00.171) 0:00:23.403 ********
2026-03-07 00:43:52.463849 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e9f941f3-03bb-56ef-8ac7-c30bc8004c51', 'data_vg': 'ceph-e9f941f3-03bb-56ef-8ac7-c30bc8004c51'})
2026-03-07 00:43:52.463868 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6cee2ec4-9e84-549b-8075-e81043ce518c', 'data_vg': 'ceph-6cee2ec4-9e84-549b-8075-e81043ce518c'})
2026-03-07 00:43:52.463886 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:43:52.463902 | orchestrator |
2026-03-07 00:43:52.463913 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-03-07 00:43:52.463923 | orchestrator | Saturday 07 March 2026 00:43:52 +0000 (0:00:00.160) 0:00:23.563 ********
2026-03-07 00:43:52.463934 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-e9f941f3-03bb-56ef-8ac7-c30bc8004c51', 'data_vg': 'ceph-e9f941f3-03bb-56ef-8ac7-c30bc8004c51'})  2026-03-07 00:43:52.463945 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6cee2ec4-9e84-549b-8075-e81043ce518c', 'data_vg': 'ceph-6cee2ec4-9e84-549b-8075-e81043ce518c'})  2026-03-07 00:43:52.463966 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:43:52.463977 | orchestrator | 2026-03-07 00:43:52.463988 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-03-07 00:43:52.463998 | orchestrator | Saturday 07 March 2026 00:43:52 +0000 (0:00:00.171) 0:00:23.734 ******** 2026-03-07 00:43:52.464009 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e9f941f3-03bb-56ef-8ac7-c30bc8004c51', 'data_vg': 'ceph-e9f941f3-03bb-56ef-8ac7-c30bc8004c51'})  2026-03-07 00:43:52.464020 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6cee2ec4-9e84-549b-8075-e81043ce518c', 'data_vg': 'ceph-6cee2ec4-9e84-549b-8075-e81043ce518c'})  2026-03-07 00:43:52.464031 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:43:52.464041 | orchestrator | 2026-03-07 00:43:52.464052 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-03-07 00:43:52.464063 | orchestrator | Saturday 07 March 2026 00:43:52 +0000 (0:00:00.149) 0:00:23.884 ******** 2026-03-07 00:43:52.464085 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e9f941f3-03bb-56ef-8ac7-c30bc8004c51', 'data_vg': 'ceph-e9f941f3-03bb-56ef-8ac7-c30bc8004c51'})  2026-03-07 00:43:58.352716 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6cee2ec4-9e84-549b-8075-e81043ce518c', 'data_vg': 'ceph-6cee2ec4-9e84-549b-8075-e81043ce518c'})  2026-03-07 00:43:58.352834 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:43:58.352851 | orchestrator | 2026-03-07 00:43:58.352865 | orchestrator | TASK [Create DB LVs for 
ceph_db_wal_devices] *********************************** 2026-03-07 00:43:58.352877 | orchestrator | Saturday 07 March 2026 00:43:52 +0000 (0:00:00.162) 0:00:24.047 ******** 2026-03-07 00:43:58.352889 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e9f941f3-03bb-56ef-8ac7-c30bc8004c51', 'data_vg': 'ceph-e9f941f3-03bb-56ef-8ac7-c30bc8004c51'})  2026-03-07 00:43:58.352901 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6cee2ec4-9e84-549b-8075-e81043ce518c', 'data_vg': 'ceph-6cee2ec4-9e84-549b-8075-e81043ce518c'})  2026-03-07 00:43:58.352912 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:43:58.352923 | orchestrator | 2026-03-07 00:43:58.352934 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-03-07 00:43:58.352944 | orchestrator | Saturday 07 March 2026 00:43:52 +0000 (0:00:00.170) 0:00:24.218 ******** 2026-03-07 00:43:58.352957 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e9f941f3-03bb-56ef-8ac7-c30bc8004c51', 'data_vg': 'ceph-e9f941f3-03bb-56ef-8ac7-c30bc8004c51'})  2026-03-07 00:43:58.352976 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6cee2ec4-9e84-549b-8075-e81043ce518c', 'data_vg': 'ceph-6cee2ec4-9e84-549b-8075-e81043ce518c'})  2026-03-07 00:43:58.352995 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:43:58.353014 | orchestrator | 2026-03-07 00:43:58.353033 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-03-07 00:43:58.353054 | orchestrator | Saturday 07 March 2026 00:43:52 +0000 (0:00:00.193) 0:00:24.411 ******** 2026-03-07 00:43:58.353075 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:43:58.353088 | orchestrator | 2026-03-07 00:43:58.353100 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-03-07 00:43:58.353111 | orchestrator | Saturday 07 March 2026 00:43:53 +0000 
(0:00:00.543) 0:00:24.954 ******** 2026-03-07 00:43:58.353121 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:43:58.353132 | orchestrator | 2026-03-07 00:43:58.353143 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-03-07 00:43:58.353173 | orchestrator | Saturday 07 March 2026 00:43:53 +0000 (0:00:00.539) 0:00:25.494 ******** 2026-03-07 00:43:58.353184 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:43:58.353195 | orchestrator | 2026-03-07 00:43:58.353206 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-03-07 00:43:58.353217 | orchestrator | Saturday 07 March 2026 00:43:54 +0000 (0:00:00.171) 0:00:25.665 ******** 2026-03-07 00:43:58.353255 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-6cee2ec4-9e84-549b-8075-e81043ce518c', 'vg_name': 'ceph-6cee2ec4-9e84-549b-8075-e81043ce518c'}) 2026-03-07 00:43:58.353270 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-e9f941f3-03bb-56ef-8ac7-c30bc8004c51', 'vg_name': 'ceph-e9f941f3-03bb-56ef-8ac7-c30bc8004c51'}) 2026-03-07 00:43:58.353283 | orchestrator | 2026-03-07 00:43:58.353296 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-03-07 00:43:58.353309 | orchestrator | Saturday 07 March 2026 00:43:54 +0000 (0:00:00.198) 0:00:25.864 ******** 2026-03-07 00:43:58.353322 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e9f941f3-03bb-56ef-8ac7-c30bc8004c51', 'data_vg': 'ceph-e9f941f3-03bb-56ef-8ac7-c30bc8004c51'})  2026-03-07 00:43:58.353335 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6cee2ec4-9e84-549b-8075-e81043ce518c', 'data_vg': 'ceph-6cee2ec4-9e84-549b-8075-e81043ce518c'})  2026-03-07 00:43:58.353347 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:43:58.353360 | orchestrator | 2026-03-07 00:43:58.353373 | orchestrator | TASK [Fail if DB LV defined in 
lvm_volumes is missing] ************************* 2026-03-07 00:43:58.353386 | orchestrator | Saturday 07 March 2026 00:43:54 +0000 (0:00:00.490) 0:00:26.355 ******** 2026-03-07 00:43:58.353398 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e9f941f3-03bb-56ef-8ac7-c30bc8004c51', 'data_vg': 'ceph-e9f941f3-03bb-56ef-8ac7-c30bc8004c51'})  2026-03-07 00:43:58.353411 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6cee2ec4-9e84-549b-8075-e81043ce518c', 'data_vg': 'ceph-6cee2ec4-9e84-549b-8075-e81043ce518c'})  2026-03-07 00:43:58.353424 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:43:58.353468 | orchestrator | 2026-03-07 00:43:58.353486 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-03-07 00:43:58.353499 | orchestrator | Saturday 07 March 2026 00:43:55 +0000 (0:00:00.176) 0:00:26.531 ******** 2026-03-07 00:43:58.353511 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e9f941f3-03bb-56ef-8ac7-c30bc8004c51', 'data_vg': 'ceph-e9f941f3-03bb-56ef-8ac7-c30bc8004c51'})  2026-03-07 00:43:58.353524 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6cee2ec4-9e84-549b-8075-e81043ce518c', 'data_vg': 'ceph-6cee2ec4-9e84-549b-8075-e81043ce518c'})  2026-03-07 00:43:58.353537 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:43:58.353550 | orchestrator | 2026-03-07 00:43:58.353562 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-03-07 00:43:58.353573 | orchestrator | Saturday 07 March 2026 00:43:55 +0000 (0:00:00.176) 0:00:26.707 ******** 2026-03-07 00:43:58.353609 | orchestrator | ok: [testbed-node-3] => { 2026-03-07 00:43:58.353629 | orchestrator |  "lvm_report": { 2026-03-07 00:43:58.353650 | orchestrator |  "lv": [ 2026-03-07 00:43:58.353662 | orchestrator |  { 2026-03-07 00:43:58.353673 | orchestrator |  "lv_name": 
"osd-block-6cee2ec4-9e84-549b-8075-e81043ce518c", 2026-03-07 00:43:58.353685 | orchestrator |  "vg_name": "ceph-6cee2ec4-9e84-549b-8075-e81043ce518c" 2026-03-07 00:43:58.353696 | orchestrator |  }, 2026-03-07 00:43:58.353707 | orchestrator |  { 2026-03-07 00:43:58.353717 | orchestrator |  "lv_name": "osd-block-e9f941f3-03bb-56ef-8ac7-c30bc8004c51", 2026-03-07 00:43:58.353728 | orchestrator |  "vg_name": "ceph-e9f941f3-03bb-56ef-8ac7-c30bc8004c51" 2026-03-07 00:43:58.353739 | orchestrator |  } 2026-03-07 00:43:58.353750 | orchestrator |  ], 2026-03-07 00:43:58.353761 | orchestrator |  "pv": [ 2026-03-07 00:43:58.353771 | orchestrator |  { 2026-03-07 00:43:58.353782 | orchestrator |  "pv_name": "/dev/sdb", 2026-03-07 00:43:58.353795 | orchestrator |  "vg_name": "ceph-e9f941f3-03bb-56ef-8ac7-c30bc8004c51" 2026-03-07 00:43:58.353813 | orchestrator |  }, 2026-03-07 00:43:58.353824 | orchestrator |  { 2026-03-07 00:43:58.353844 | orchestrator |  "pv_name": "/dev/sdc", 2026-03-07 00:43:58.353855 | orchestrator |  "vg_name": "ceph-6cee2ec4-9e84-549b-8075-e81043ce518c" 2026-03-07 00:43:58.353865 | orchestrator |  } 2026-03-07 00:43:58.353877 | orchestrator |  ] 2026-03-07 00:43:58.353887 | orchestrator |  } 2026-03-07 00:43:58.353898 | orchestrator | } 2026-03-07 00:43:58.353909 | orchestrator | 2026-03-07 00:43:58.353920 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-03-07 00:43:58.353931 | orchestrator | 2026-03-07 00:43:58.353942 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-07 00:43:58.353953 | orchestrator | Saturday 07 March 2026 00:43:55 +0000 (0:00:00.328) 0:00:27.036 ******** 2026-03-07 00:43:58.353964 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-03-07 00:43:58.353975 | orchestrator | 2026-03-07 00:43:58.353986 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-07 
00:43:58.353996 | orchestrator | Saturday 07 March 2026 00:43:55 +0000 (0:00:00.256) 0:00:27.293 ******** 2026-03-07 00:43:58.354007 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:43:58.354083 | orchestrator | 2026-03-07 00:43:58.354096 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-07 00:43:58.354107 | orchestrator | Saturday 07 March 2026 00:43:56 +0000 (0:00:00.246) 0:00:27.539 ******** 2026-03-07 00:43:58.354118 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-03-07 00:43:58.354130 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-03-07 00:43:58.354141 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-03-07 00:43:58.354152 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-03-07 00:43:58.354163 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-03-07 00:43:58.354174 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-03-07 00:43:58.354184 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-03-07 00:43:58.354195 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-03-07 00:43:58.354206 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-03-07 00:43:58.354217 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-03-07 00:43:58.354227 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-03-07 00:43:58.354238 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-03-07 00:43:58.354249 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-03-07 00:43:58.354260 | orchestrator | 2026-03-07 00:43:58.354271 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-07 00:43:58.354281 | orchestrator | Saturday 07 March 2026 00:43:56 +0000 (0:00:00.460) 0:00:28.000 ******** 2026-03-07 00:43:58.354292 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:43:58.354303 | orchestrator | 2026-03-07 00:43:58.354314 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-07 00:43:58.354336 | orchestrator | Saturday 07 March 2026 00:43:56 +0000 (0:00:00.226) 0:00:28.226 ******** 2026-03-07 00:43:58.354348 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:43:58.354359 | orchestrator | 2026-03-07 00:43:58.354370 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-07 00:43:58.354381 | orchestrator | Saturday 07 March 2026 00:43:56 +0000 (0:00:00.199) 0:00:28.425 ******** 2026-03-07 00:43:58.354392 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:43:58.354403 | orchestrator | 2026-03-07 00:43:58.354414 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-07 00:43:58.354432 | orchestrator | Saturday 07 March 2026 00:43:57 +0000 (0:00:00.713) 0:00:29.139 ******** 2026-03-07 00:43:58.354503 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:43:58.354514 | orchestrator | 2026-03-07 00:43:58.354525 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-07 00:43:58.354536 | orchestrator | Saturday 07 March 2026 00:43:57 +0000 (0:00:00.233) 0:00:29.373 ******** 2026-03-07 00:43:58.354547 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:43:58.354558 | orchestrator | 2026-03-07 00:43:58.354569 | orchestrator | TASK [Add known links to the 
list of available block devices] ****************** 2026-03-07 00:43:58.354579 | orchestrator | Saturday 07 March 2026 00:43:58 +0000 (0:00:00.237) 0:00:29.611 ******** 2026-03-07 00:43:58.354590 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:43:58.354601 | orchestrator | 2026-03-07 00:43:58.354623 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-07 00:44:10.622892 | orchestrator | Saturday 07 March 2026 00:43:58 +0000 (0:00:00.238) 0:00:29.849 ******** 2026-03-07 00:44:10.623001 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:44:10.623016 | orchestrator | 2026-03-07 00:44:10.623029 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-07 00:44:10.623040 | orchestrator | Saturday 07 March 2026 00:43:58 +0000 (0:00:00.244) 0:00:30.093 ******** 2026-03-07 00:44:10.623050 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:44:10.623061 | orchestrator | 2026-03-07 00:44:10.623073 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-07 00:44:10.623084 | orchestrator | Saturday 07 March 2026 00:43:58 +0000 (0:00:00.225) 0:00:30.319 ******** 2026-03-07 00:44:10.623095 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_420a1f40-7e0f-4106-8b14-3c7e5e75cad6) 2026-03-07 00:44:10.623107 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_420a1f40-7e0f-4106-8b14-3c7e5e75cad6) 2026-03-07 00:44:10.623118 | orchestrator | 2026-03-07 00:44:10.623130 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-07 00:44:10.623141 | orchestrator | Saturday 07 March 2026 00:43:59 +0000 (0:00:00.463) 0:00:30.782 ******** 2026-03-07 00:44:10.623152 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_c95cdd10-84fe-4990-af41-f1a34ec8ee15) 2026-03-07 00:44:10.623164 | orchestrator | ok: 
[testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_c95cdd10-84fe-4990-af41-f1a34ec8ee15) 2026-03-07 00:44:10.623175 | orchestrator | 2026-03-07 00:44:10.623187 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-07 00:44:10.623198 | orchestrator | Saturday 07 March 2026 00:43:59 +0000 (0:00:00.455) 0:00:31.238 ******** 2026-03-07 00:44:10.623209 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_aeae70bf-06ae-4bd4-b471-9be2a413b359) 2026-03-07 00:44:10.623221 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_aeae70bf-06ae-4bd4-b471-9be2a413b359) 2026-03-07 00:44:10.623232 | orchestrator | 2026-03-07 00:44:10.623243 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-07 00:44:10.623255 | orchestrator | Saturday 07 March 2026 00:44:00 +0000 (0:00:00.535) 0:00:31.773 ******** 2026-03-07 00:44:10.623283 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_9c38bee3-edc8-40af-8be7-576eb57a340e) 2026-03-07 00:44:10.623295 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_9c38bee3-edc8-40af-8be7-576eb57a340e) 2026-03-07 00:44:10.623307 | orchestrator | 2026-03-07 00:44:10.623318 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-07 00:44:10.623329 | orchestrator | Saturday 07 March 2026 00:44:01 +0000 (0:00:00.919) 0:00:32.693 ******** 2026-03-07 00:44:10.623370 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-07 00:44:10.623381 | orchestrator | 2026-03-07 00:44:10.623393 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-07 00:44:10.623404 | orchestrator | Saturday 07 March 2026 00:44:01 +0000 (0:00:00.723) 0:00:33.416 ******** 2026-03-07 00:44:10.623442 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => 
(item=loop0) 2026-03-07 00:44:10.623455 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-03-07 00:44:10.623467 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-03-07 00:44:10.623478 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-03-07 00:44:10.623488 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-03-07 00:44:10.623499 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-03-07 00:44:10.623509 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-03-07 00:44:10.623583 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-03-07 00:44:10.623595 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-03-07 00:44:10.623607 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-03-07 00:44:10.623618 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2026-03-07 00:44:10.623629 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-03-07 00:44:10.623639 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-03-07 00:44:10.623651 | orchestrator | 2026-03-07 00:44:10.623661 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-07 00:44:10.623671 | orchestrator | Saturday 07 March 2026 00:44:02 +0000 (0:00:01.076) 0:00:34.493 ******** 2026-03-07 00:44:10.623683 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:44:10.623694 | orchestrator | 2026-03-07 
00:44:10.623705 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-07 00:44:10.623716 | orchestrator | Saturday 07 March 2026 00:44:03 +0000 (0:00:00.198) 0:00:34.691 ******** 2026-03-07 00:44:10.623727 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:44:10.623738 | orchestrator | 2026-03-07 00:44:10.623750 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-07 00:44:10.623763 | orchestrator | Saturday 07 March 2026 00:44:03 +0000 (0:00:00.254) 0:00:34.945 ******** 2026-03-07 00:44:10.623774 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:44:10.623784 | orchestrator | 2026-03-07 00:44:10.623818 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-07 00:44:10.623829 | orchestrator | Saturday 07 March 2026 00:44:03 +0000 (0:00:00.251) 0:00:35.197 ******** 2026-03-07 00:44:10.623838 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:44:10.623847 | orchestrator | 2026-03-07 00:44:10.623856 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-07 00:44:10.623865 | orchestrator | Saturday 07 March 2026 00:44:03 +0000 (0:00:00.225) 0:00:35.422 ******** 2026-03-07 00:44:10.623874 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:44:10.623885 | orchestrator | 2026-03-07 00:44:10.623895 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-07 00:44:10.623905 | orchestrator | Saturday 07 March 2026 00:44:04 +0000 (0:00:00.272) 0:00:35.694 ******** 2026-03-07 00:44:10.623915 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:44:10.623924 | orchestrator | 2026-03-07 00:44:10.623935 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-07 00:44:10.623945 | orchestrator | Saturday 07 March 2026 00:44:04 +0000 (0:00:00.222) 
0:00:35.917 ******** 2026-03-07 00:44:10.623955 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:44:10.623965 | orchestrator | 2026-03-07 00:44:10.623977 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-07 00:44:10.623987 | orchestrator | Saturday 07 March 2026 00:44:04 +0000 (0:00:00.223) 0:00:36.141 ******** 2026-03-07 00:44:10.624011 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:44:10.624018 | orchestrator | 2026-03-07 00:44:10.624025 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-07 00:44:10.624031 | orchestrator | Saturday 07 March 2026 00:44:04 +0000 (0:00:00.229) 0:00:36.370 ******** 2026-03-07 00:44:10.624037 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-03-07 00:44:10.624043 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-03-07 00:44:10.624050 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-03-07 00:44:10.624056 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-03-07 00:44:10.624062 | orchestrator | 2026-03-07 00:44:10.624068 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-07 00:44:10.624074 | orchestrator | Saturday 07 March 2026 00:44:05 +0000 (0:00:00.973) 0:00:37.344 ******** 2026-03-07 00:44:10.624080 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:44:10.624086 | orchestrator | 2026-03-07 00:44:10.624092 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-07 00:44:10.624098 | orchestrator | Saturday 07 March 2026 00:44:06 +0000 (0:00:00.223) 0:00:37.568 ******** 2026-03-07 00:44:10.624112 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:44:10.624118 | orchestrator | 2026-03-07 00:44:10.624124 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-07 00:44:10.624130 | orchestrator | Saturday 07 
March 2026 00:44:06 +0000 (0:00:00.750) 0:00:38.319 ******** 2026-03-07 00:44:10.624137 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:44:10.624143 | orchestrator | 2026-03-07 00:44:10.624149 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-07 00:44:10.624155 | orchestrator | Saturday 07 March 2026 00:44:07 +0000 (0:00:00.225) 0:00:38.544 ******** 2026-03-07 00:44:10.624161 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:44:10.624167 | orchestrator | 2026-03-07 00:44:10.624173 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-03-07 00:44:10.624179 | orchestrator | Saturday 07 March 2026 00:44:07 +0000 (0:00:00.218) 0:00:38.763 ******** 2026-03-07 00:44:10.624185 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:44:10.624191 | orchestrator | 2026-03-07 00:44:10.624197 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-03-07 00:44:10.624203 | orchestrator | Saturday 07 March 2026 00:44:07 +0000 (0:00:00.189) 0:00:38.952 ******** 2026-03-07 00:44:10.624209 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c6d853cd-f8df-5f7f-ab25-9ac4f40a4d2c'}}) 2026-03-07 00:44:10.624216 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '50ec861c-6b17-5421-b6cb-257ea2a8b129'}}) 2026-03-07 00:44:10.624222 | orchestrator | 2026-03-07 00:44:10.624228 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-03-07 00:44:10.624234 | orchestrator | Saturday 07 March 2026 00:44:07 +0000 (0:00:00.211) 0:00:39.163 ******** 2026-03-07 00:44:10.624241 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-c6d853cd-f8df-5f7f-ab25-9ac4f40a4d2c', 'data_vg': 'ceph-c6d853cd-f8df-5f7f-ab25-9ac4f40a4d2c'}) 2026-03-07 00:44:10.624249 | orchestrator | changed: [testbed-node-4] => 
(item={'data': 'osd-block-50ec861c-6b17-5421-b6cb-257ea2a8b129', 'data_vg': 'ceph-50ec861c-6b17-5421-b6cb-257ea2a8b129'}) 2026-03-07 00:44:10.624255 | orchestrator | 2026-03-07 00:44:10.624262 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-03-07 00:44:10.624268 | orchestrator | Saturday 07 March 2026 00:44:09 +0000 (0:00:01.732) 0:00:40.895 ******** 2026-03-07 00:44:10.624274 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c6d853cd-f8df-5f7f-ab25-9ac4f40a4d2c', 'data_vg': 'ceph-c6d853cd-f8df-5f7f-ab25-9ac4f40a4d2c'})  2026-03-07 00:44:10.624281 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-50ec861c-6b17-5421-b6cb-257ea2a8b129', 'data_vg': 'ceph-50ec861c-6b17-5421-b6cb-257ea2a8b129'})  2026-03-07 00:44:10.624292 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:44:10.624298 | orchestrator | 2026-03-07 00:44:10.624304 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-03-07 00:44:10.624310 | orchestrator | Saturday 07 March 2026 00:44:09 +0000 (0:00:00.162) 0:00:41.058 ******** 2026-03-07 00:44:10.624316 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-c6d853cd-f8df-5f7f-ab25-9ac4f40a4d2c', 'data_vg': 'ceph-c6d853cd-f8df-5f7f-ab25-9ac4f40a4d2c'}) 2026-03-07 00:44:10.624329 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-50ec861c-6b17-5421-b6cb-257ea2a8b129', 'data_vg': 'ceph-50ec861c-6b17-5421-b6cb-257ea2a8b129'}) 2026-03-07 00:44:16.540941 | orchestrator | 2026-03-07 00:44:16.541977 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-03-07 00:44:16.542074 | orchestrator | Saturday 07 March 2026 00:44:10 +0000 (0:00:01.151) 0:00:42.210 ******** 2026-03-07 00:44:16.542084 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c6d853cd-f8df-5f7f-ab25-9ac4f40a4d2c', 'data_vg': 
'ceph-c6d853cd-f8df-5f7f-ab25-9ac4f40a4d2c'})  2026-03-07 00:44:16.542093 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-50ec861c-6b17-5421-b6cb-257ea2a8b129', 'data_vg': 'ceph-50ec861c-6b17-5421-b6cb-257ea2a8b129'})  2026-03-07 00:44:16.542099 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:44:16.542105 | orchestrator | 2026-03-07 00:44:16.542111 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-03-07 00:44:16.542119 | orchestrator | Saturday 07 March 2026 00:44:10 +0000 (0:00:00.163) 0:00:42.373 ******** 2026-03-07 00:44:16.542129 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:44:16.542141 | orchestrator | 2026-03-07 00:44:16.542149 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-03-07 00:44:16.542156 | orchestrator | Saturday 07 March 2026 00:44:11 +0000 (0:00:00.163) 0:00:42.537 ******** 2026-03-07 00:44:16.542165 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c6d853cd-f8df-5f7f-ab25-9ac4f40a4d2c', 'data_vg': 'ceph-c6d853cd-f8df-5f7f-ab25-9ac4f40a4d2c'})  2026-03-07 00:44:16.542173 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-50ec861c-6b17-5421-b6cb-257ea2a8b129', 'data_vg': 'ceph-50ec861c-6b17-5421-b6cb-257ea2a8b129'})  2026-03-07 00:44:16.542182 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:44:16.542189 | orchestrator | 2026-03-07 00:44:16.542197 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-03-07 00:44:16.542205 | orchestrator | Saturday 07 March 2026 00:44:11 +0000 (0:00:00.152) 0:00:42.690 ******** 2026-03-07 00:44:16.542213 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:44:16.542235 | orchestrator | 2026-03-07 00:44:16.542245 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-03-07 00:44:16.542253 | orchestrator | 
Saturday 07 March 2026 00:44:11 +0000 (0:00:00.165) 0:00:42.856 ******** 2026-03-07 00:44:16.542262 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c6d853cd-f8df-5f7f-ab25-9ac4f40a4d2c', 'data_vg': 'ceph-c6d853cd-f8df-5f7f-ab25-9ac4f40a4d2c'})  2026-03-07 00:44:16.542270 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-50ec861c-6b17-5421-b6cb-257ea2a8b129', 'data_vg': 'ceph-50ec861c-6b17-5421-b6cb-257ea2a8b129'})  2026-03-07 00:44:16.542276 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:44:16.542281 | orchestrator | 2026-03-07 00:44:16.542286 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-03-07 00:44:16.542292 | orchestrator | Saturday 07 March 2026 00:44:11 +0000 (0:00:00.368) 0:00:43.224 ******** 2026-03-07 00:44:16.542300 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:44:16.542312 | orchestrator | 2026-03-07 00:44:16.542323 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-03-07 00:44:16.542331 | orchestrator | Saturday 07 March 2026 00:44:11 +0000 (0:00:00.138) 0:00:43.362 ******** 2026-03-07 00:44:16.542341 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c6d853cd-f8df-5f7f-ab25-9ac4f40a4d2c', 'data_vg': 'ceph-c6d853cd-f8df-5f7f-ab25-9ac4f40a4d2c'})  2026-03-07 00:44:16.542374 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-50ec861c-6b17-5421-b6cb-257ea2a8b129', 'data_vg': 'ceph-50ec861c-6b17-5421-b6cb-257ea2a8b129'})  2026-03-07 00:44:16.542382 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:44:16.542391 | orchestrator | 2026-03-07 00:44:16.542399 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-03-07 00:44:16.542424 | orchestrator | Saturday 07 March 2026 00:44:12 +0000 (0:00:00.169) 0:00:43.532 ******** 2026-03-07 00:44:16.542432 | orchestrator | ok: [testbed-node-4] 
2026-03-07 00:44:16.542443 | orchestrator | 2026-03-07 00:44:16.542451 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-03-07 00:44:16.542459 | orchestrator | Saturday 07 March 2026 00:44:12 +0000 (0:00:00.177) 0:00:43.709 ******** 2026-03-07 00:44:16.542467 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c6d853cd-f8df-5f7f-ab25-9ac4f40a4d2c', 'data_vg': 'ceph-c6d853cd-f8df-5f7f-ab25-9ac4f40a4d2c'})  2026-03-07 00:44:16.542476 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-50ec861c-6b17-5421-b6cb-257ea2a8b129', 'data_vg': 'ceph-50ec861c-6b17-5421-b6cb-257ea2a8b129'})  2026-03-07 00:44:16.542485 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:44:16.542494 | orchestrator | 2026-03-07 00:44:16.542503 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2026-03-07 00:44:16.542512 | orchestrator | Saturday 07 March 2026 00:44:12 +0000 (0:00:00.163) 0:00:43.873 ******** 2026-03-07 00:44:16.542521 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c6d853cd-f8df-5f7f-ab25-9ac4f40a4d2c', 'data_vg': 'ceph-c6d853cd-f8df-5f7f-ab25-9ac4f40a4d2c'})  2026-03-07 00:44:16.542529 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-50ec861c-6b17-5421-b6cb-257ea2a8b129', 'data_vg': 'ceph-50ec861c-6b17-5421-b6cb-257ea2a8b129'})  2026-03-07 00:44:16.542538 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:44:16.542546 | orchestrator | 2026-03-07 00:44:16.542575 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-03-07 00:44:16.542607 | orchestrator | Saturday 07 March 2026 00:44:12 +0000 (0:00:00.185) 0:00:44.059 ******** 2026-03-07 00:44:16.542616 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c6d853cd-f8df-5f7f-ab25-9ac4f40a4d2c', 'data_vg': 'ceph-c6d853cd-f8df-5f7f-ab25-9ac4f40a4d2c'})  2026-03-07 
00:44:16.542625 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-50ec861c-6b17-5421-b6cb-257ea2a8b129', 'data_vg': 'ceph-50ec861c-6b17-5421-b6cb-257ea2a8b129'})  2026-03-07 00:44:16.542633 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:44:16.542640 | orchestrator | 2026-03-07 00:44:16.542647 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-03-07 00:44:16.542654 | orchestrator | Saturday 07 March 2026 00:44:12 +0000 (0:00:00.169) 0:00:44.229 ******** 2026-03-07 00:44:16.542662 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:44:16.542670 | orchestrator | 2026-03-07 00:44:16.542678 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-03-07 00:44:16.542686 | orchestrator | Saturday 07 March 2026 00:44:12 +0000 (0:00:00.147) 0:00:44.376 ******** 2026-03-07 00:44:16.542695 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:44:16.542702 | orchestrator | 2026-03-07 00:44:16.542710 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2026-03-07 00:44:16.542718 | orchestrator | Saturday 07 March 2026 00:44:13 +0000 (0:00:00.144) 0:00:44.521 ******** 2026-03-07 00:44:16.542726 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:44:16.542733 | orchestrator | 2026-03-07 00:44:16.542740 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-03-07 00:44:16.542748 | orchestrator | Saturday 07 March 2026 00:44:13 +0000 (0:00:00.151) 0:00:44.672 ******** 2026-03-07 00:44:16.542756 | orchestrator | ok: [testbed-node-4] => { 2026-03-07 00:44:16.542764 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-03-07 00:44:16.542783 | orchestrator | } 2026-03-07 00:44:16.542791 | orchestrator | 2026-03-07 00:44:16.542799 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-03-07 
00:44:16.542807 | orchestrator | Saturday 07 March 2026 00:44:13 +0000 (0:00:00.143) 0:00:44.816 ******** 2026-03-07 00:44:16.542815 | orchestrator | ok: [testbed-node-4] => { 2026-03-07 00:44:16.542822 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-03-07 00:44:16.542829 | orchestrator | } 2026-03-07 00:44:16.542837 | orchestrator | 2026-03-07 00:44:16.542852 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-03-07 00:44:16.542860 | orchestrator | Saturday 07 March 2026 00:44:13 +0000 (0:00:00.172) 0:00:44.988 ******** 2026-03-07 00:44:16.542868 | orchestrator | ok: [testbed-node-4] => { 2026-03-07 00:44:16.542876 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-03-07 00:44:16.542883 | orchestrator | } 2026-03-07 00:44:16.542892 | orchestrator | 2026-03-07 00:44:16.542899 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-03-07 00:44:16.542907 | orchestrator | Saturday 07 March 2026 00:44:13 +0000 (0:00:00.380) 0:00:45.369 ******** 2026-03-07 00:44:16.542915 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:44:16.542924 | orchestrator | 2026-03-07 00:44:16.542932 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-03-07 00:44:16.542941 | orchestrator | Saturday 07 March 2026 00:44:14 +0000 (0:00:00.522) 0:00:45.891 ******** 2026-03-07 00:44:16.542948 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:44:16.542955 | orchestrator | 2026-03-07 00:44:16.542963 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-03-07 00:44:16.542970 | orchestrator | Saturday 07 March 2026 00:44:14 +0000 (0:00:00.495) 0:00:46.387 ******** 2026-03-07 00:44:16.542978 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:44:16.542986 | orchestrator | 2026-03-07 00:44:16.542993 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] 
************************* 2026-03-07 00:44:16.543061 | orchestrator | Saturday 07 March 2026 00:44:15 +0000 (0:00:00.494) 0:00:46.881 ******** 2026-03-07 00:44:16.543072 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:44:16.543080 | orchestrator | 2026-03-07 00:44:16.543088 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-03-07 00:44:16.543096 | orchestrator | Saturday 07 March 2026 00:44:15 +0000 (0:00:00.161) 0:00:47.043 ******** 2026-03-07 00:44:16.543104 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:44:16.543112 | orchestrator | 2026-03-07 00:44:16.543119 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-03-07 00:44:16.543126 | orchestrator | Saturday 07 March 2026 00:44:15 +0000 (0:00:00.109) 0:00:47.152 ******** 2026-03-07 00:44:16.543134 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:44:16.543141 | orchestrator | 2026-03-07 00:44:16.543149 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-03-07 00:44:16.543157 | orchestrator | Saturday 07 March 2026 00:44:15 +0000 (0:00:00.128) 0:00:47.280 ******** 2026-03-07 00:44:16.543166 | orchestrator | ok: [testbed-node-4] => { 2026-03-07 00:44:16.543174 | orchestrator |  "vgs_report": { 2026-03-07 00:44:16.543183 | orchestrator |  "vg": [] 2026-03-07 00:44:16.543190 | orchestrator |  } 2026-03-07 00:44:16.543198 | orchestrator | } 2026-03-07 00:44:16.543206 | orchestrator | 2026-03-07 00:44:16.543213 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-03-07 00:44:16.543222 | orchestrator | Saturday 07 March 2026 00:44:15 +0000 (0:00:00.144) 0:00:47.425 ******** 2026-03-07 00:44:16.543230 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:44:16.543238 | orchestrator | 2026-03-07 00:44:16.543246 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] 
************************ 2026-03-07 00:44:16.543255 | orchestrator | Saturday 07 March 2026 00:44:16 +0000 (0:00:00.159) 0:00:47.584 ******** 2026-03-07 00:44:16.543264 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:44:16.543272 | orchestrator | 2026-03-07 00:44:16.543279 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-03-07 00:44:16.543300 | orchestrator | Saturday 07 March 2026 00:44:16 +0000 (0:00:00.167) 0:00:47.752 ******** 2026-03-07 00:44:16.543309 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:44:16.543318 | orchestrator | 2026-03-07 00:44:16.543362 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-03-07 00:44:16.543373 | orchestrator | Saturday 07 March 2026 00:44:16 +0000 (0:00:00.151) 0:00:47.903 ******** 2026-03-07 00:44:16.543382 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:44:16.543392 | orchestrator | 2026-03-07 00:44:16.543415 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-03-07 00:44:21.248172 | orchestrator | Saturday 07 March 2026 00:44:16 +0000 (0:00:00.134) 0:00:48.038 ******** 2026-03-07 00:44:21.248266 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:44:21.248282 | orchestrator | 2026-03-07 00:44:21.248296 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-03-07 00:44:21.248308 | orchestrator | Saturday 07 March 2026 00:44:16 +0000 (0:00:00.374) 0:00:48.412 ******** 2026-03-07 00:44:21.248319 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:44:21.248337 | orchestrator | 2026-03-07 00:44:21.248356 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-03-07 00:44:21.248375 | orchestrator | Saturday 07 March 2026 00:44:17 +0000 (0:00:00.155) 0:00:48.568 ******** 2026-03-07 00:44:21.248394 | orchestrator | skipping: [testbed-node-4] 
2026-03-07 00:44:21.248406 | orchestrator | 2026-03-07 00:44:21.248416 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-03-07 00:44:21.248427 | orchestrator | Saturday 07 March 2026 00:44:17 +0000 (0:00:00.135) 0:00:48.703 ******** 2026-03-07 00:44:21.248438 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:44:21.248449 | orchestrator | 2026-03-07 00:44:21.248459 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-03-07 00:44:21.248470 | orchestrator | Saturday 07 March 2026 00:44:17 +0000 (0:00:00.155) 0:00:48.858 ******** 2026-03-07 00:44:21.248480 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:44:21.248491 | orchestrator | 2026-03-07 00:44:21.248502 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-03-07 00:44:21.248513 | orchestrator | Saturday 07 March 2026 00:44:17 +0000 (0:00:00.154) 0:00:49.013 ******** 2026-03-07 00:44:21.248523 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:44:21.248534 | orchestrator | 2026-03-07 00:44:21.248545 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-03-07 00:44:21.248555 | orchestrator | Saturday 07 March 2026 00:44:17 +0000 (0:00:00.170) 0:00:49.184 ******** 2026-03-07 00:44:21.248566 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:44:21.248577 | orchestrator | 2026-03-07 00:44:21.248656 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-03-07 00:44:21.248668 | orchestrator | Saturday 07 March 2026 00:44:17 +0000 (0:00:00.149) 0:00:49.333 ******** 2026-03-07 00:44:21.248693 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:44:21.248705 | orchestrator | 2026-03-07 00:44:21.248718 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-03-07 00:44:21.248731 | orchestrator | 
Saturday 07 March 2026 00:44:17 +0000 (0:00:00.137) 0:00:49.470 ******** 2026-03-07 00:44:21.248744 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:44:21.248757 | orchestrator | 2026-03-07 00:44:21.248770 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-03-07 00:44:21.248782 | orchestrator | Saturday 07 March 2026 00:44:18 +0000 (0:00:00.125) 0:00:49.596 ******** 2026-03-07 00:44:21.248794 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:44:21.248806 | orchestrator | 2026-03-07 00:44:21.248820 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-03-07 00:44:21.248832 | orchestrator | Saturday 07 March 2026 00:44:18 +0000 (0:00:00.116) 0:00:49.712 ******** 2026-03-07 00:44:21.248845 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c6d853cd-f8df-5f7f-ab25-9ac4f40a4d2c', 'data_vg': 'ceph-c6d853cd-f8df-5f7f-ab25-9ac4f40a4d2c'})  2026-03-07 00:44:21.248881 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-50ec861c-6b17-5421-b6cb-257ea2a8b129', 'data_vg': 'ceph-50ec861c-6b17-5421-b6cb-257ea2a8b129'})  2026-03-07 00:44:21.248895 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:44:21.248909 | orchestrator | 2026-03-07 00:44:21.248920 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-03-07 00:44:21.248931 | orchestrator | Saturday 07 March 2026 00:44:18 +0000 (0:00:00.146) 0:00:49.859 ******** 2026-03-07 00:44:21.248942 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c6d853cd-f8df-5f7f-ab25-9ac4f40a4d2c', 'data_vg': 'ceph-c6d853cd-f8df-5f7f-ab25-9ac4f40a4d2c'})  2026-03-07 00:44:21.248953 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-50ec861c-6b17-5421-b6cb-257ea2a8b129', 'data_vg': 'ceph-50ec861c-6b17-5421-b6cb-257ea2a8b129'})  2026-03-07 00:44:21.248964 | orchestrator | skipping: 
[testbed-node-4] 2026-03-07 00:44:21.248975 | orchestrator | 2026-03-07 00:44:21.248986 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-03-07 00:44:21.248996 | orchestrator | Saturday 07 March 2026 00:44:18 +0000 (0:00:00.172) 0:00:50.032 ******** 2026-03-07 00:44:21.249007 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c6d853cd-f8df-5f7f-ab25-9ac4f40a4d2c', 'data_vg': 'ceph-c6d853cd-f8df-5f7f-ab25-9ac4f40a4d2c'})  2026-03-07 00:44:21.249018 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-50ec861c-6b17-5421-b6cb-257ea2a8b129', 'data_vg': 'ceph-50ec861c-6b17-5421-b6cb-257ea2a8b129'})  2026-03-07 00:44:21.249029 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:44:21.249040 | orchestrator | 2026-03-07 00:44:21.249051 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2026-03-07 00:44:21.249061 | orchestrator | Saturday 07 March 2026 00:44:18 +0000 (0:00:00.310) 0:00:50.342 ******** 2026-03-07 00:44:21.249072 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c6d853cd-f8df-5f7f-ab25-9ac4f40a4d2c', 'data_vg': 'ceph-c6d853cd-f8df-5f7f-ab25-9ac4f40a4d2c'})  2026-03-07 00:44:21.249083 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-50ec861c-6b17-5421-b6cb-257ea2a8b129', 'data_vg': 'ceph-50ec861c-6b17-5421-b6cb-257ea2a8b129'})  2026-03-07 00:44:21.249094 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:44:21.249105 | orchestrator | 2026-03-07 00:44:21.249134 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-03-07 00:44:21.249145 | orchestrator | Saturday 07 March 2026 00:44:18 +0000 (0:00:00.132) 0:00:50.475 ******** 2026-03-07 00:44:21.249156 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c6d853cd-f8df-5f7f-ab25-9ac4f40a4d2c', 'data_vg': 
'ceph-c6d853cd-f8df-5f7f-ab25-9ac4f40a4d2c'})  2026-03-07 00:44:21.249168 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-50ec861c-6b17-5421-b6cb-257ea2a8b129', 'data_vg': 'ceph-50ec861c-6b17-5421-b6cb-257ea2a8b129'})  2026-03-07 00:44:21.249179 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:44:21.249190 | orchestrator | 2026-03-07 00:44:21.249201 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-03-07 00:44:21.249211 | orchestrator | Saturday 07 March 2026 00:44:19 +0000 (0:00:00.214) 0:00:50.690 ******** 2026-03-07 00:44:21.249222 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c6d853cd-f8df-5f7f-ab25-9ac4f40a4d2c', 'data_vg': 'ceph-c6d853cd-f8df-5f7f-ab25-9ac4f40a4d2c'})  2026-03-07 00:44:21.249233 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-50ec861c-6b17-5421-b6cb-257ea2a8b129', 'data_vg': 'ceph-50ec861c-6b17-5421-b6cb-257ea2a8b129'})  2026-03-07 00:44:21.249244 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:44:21.249254 | orchestrator | 2026-03-07 00:44:21.249265 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-03-07 00:44:21.249276 | orchestrator | Saturday 07 March 2026 00:44:19 +0000 (0:00:00.156) 0:00:50.846 ******** 2026-03-07 00:44:21.249287 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c6d853cd-f8df-5f7f-ab25-9ac4f40a4d2c', 'data_vg': 'ceph-c6d853cd-f8df-5f7f-ab25-9ac4f40a4d2c'})  2026-03-07 00:44:21.249305 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-50ec861c-6b17-5421-b6cb-257ea2a8b129', 'data_vg': 'ceph-50ec861c-6b17-5421-b6cb-257ea2a8b129'})  2026-03-07 00:44:21.249316 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:44:21.249327 | orchestrator | 2026-03-07 00:44:21.249338 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-03-07 
00:44:21.249348 | orchestrator | Saturday 07 March 2026 00:44:19 +0000 (0:00:00.150) 0:00:50.997 ******** 2026-03-07 00:44:21.249359 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c6d853cd-f8df-5f7f-ab25-9ac4f40a4d2c', 'data_vg': 'ceph-c6d853cd-f8df-5f7f-ab25-9ac4f40a4d2c'})  2026-03-07 00:44:21.249370 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-50ec861c-6b17-5421-b6cb-257ea2a8b129', 'data_vg': 'ceph-50ec861c-6b17-5421-b6cb-257ea2a8b129'})  2026-03-07 00:44:21.249381 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:44:21.249391 | orchestrator | 2026-03-07 00:44:21.249402 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-03-07 00:44:21.249413 | orchestrator | Saturday 07 March 2026 00:44:19 +0000 (0:00:00.136) 0:00:51.133 ******** 2026-03-07 00:44:21.249424 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:44:21.249434 | orchestrator | 2026-03-07 00:44:21.249445 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-03-07 00:44:21.249456 | orchestrator | Saturday 07 March 2026 00:44:20 +0000 (0:00:00.512) 0:00:51.646 ******** 2026-03-07 00:44:21.249467 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:44:21.249477 | orchestrator | 2026-03-07 00:44:21.249488 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-03-07 00:44:21.249499 | orchestrator | Saturday 07 March 2026 00:44:20 +0000 (0:00:00.538) 0:00:52.184 ******** 2026-03-07 00:44:21.249510 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:44:21.249520 | orchestrator | 2026-03-07 00:44:21.249531 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-03-07 00:44:21.249542 | orchestrator | Saturday 07 March 2026 00:44:20 +0000 (0:00:00.155) 0:00:52.340 ******** 2026-03-07 00:44:21.249553 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 
'osd-block-50ec861c-6b17-5421-b6cb-257ea2a8b129', 'vg_name': 'ceph-50ec861c-6b17-5421-b6cb-257ea2a8b129'}) 2026-03-07 00:44:21.249565 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-c6d853cd-f8df-5f7f-ab25-9ac4f40a4d2c', 'vg_name': 'ceph-c6d853cd-f8df-5f7f-ab25-9ac4f40a4d2c'}) 2026-03-07 00:44:21.249576 | orchestrator | 2026-03-07 00:44:21.249609 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-03-07 00:44:21.249620 | orchestrator | Saturday 07 March 2026 00:44:21 +0000 (0:00:00.196) 0:00:52.537 ******** 2026-03-07 00:44:21.249631 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c6d853cd-f8df-5f7f-ab25-9ac4f40a4d2c', 'data_vg': 'ceph-c6d853cd-f8df-5f7f-ab25-9ac4f40a4d2c'})  2026-03-07 00:44:21.249642 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-50ec861c-6b17-5421-b6cb-257ea2a8b129', 'data_vg': 'ceph-50ec861c-6b17-5421-b6cb-257ea2a8b129'})  2026-03-07 00:44:21.249653 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:44:21.249664 | orchestrator | 2026-03-07 00:44:21.249675 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-03-07 00:44:21.249685 | orchestrator | Saturday 07 March 2026 00:44:21 +0000 (0:00:00.139) 0:00:52.676 ******** 2026-03-07 00:44:21.249696 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c6d853cd-f8df-5f7f-ab25-9ac4f40a4d2c', 'data_vg': 'ceph-c6d853cd-f8df-5f7f-ab25-9ac4f40a4d2c'})  2026-03-07 00:44:21.249714 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-50ec861c-6b17-5421-b6cb-257ea2a8b129', 'data_vg': 'ceph-50ec861c-6b17-5421-b6cb-257ea2a8b129'})  2026-03-07 00:44:27.812769 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:44:27.812889 | orchestrator | 2026-03-07 00:44:27.812906 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-03-07 00:44:27.812919 | 
orchestrator | Saturday 07 March 2026 00:44:21 +0000 (0:00:00.146) 0:00:52.823 ******** 2026-03-07 00:44:27.812931 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c6d853cd-f8df-5f7f-ab25-9ac4f40a4d2c', 'data_vg': 'ceph-c6d853cd-f8df-5f7f-ab25-9ac4f40a4d2c'})  2026-03-07 00:44:27.812943 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-50ec861c-6b17-5421-b6cb-257ea2a8b129', 'data_vg': 'ceph-50ec861c-6b17-5421-b6cb-257ea2a8b129'})  2026-03-07 00:44:27.812954 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:44:27.812965 | orchestrator | 2026-03-07 00:44:27.812977 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-03-07 00:44:27.812988 | orchestrator | Saturday 07 March 2026 00:44:21 +0000 (0:00:00.182) 0:00:53.006 ******** 2026-03-07 00:44:27.812999 | orchestrator | ok: [testbed-node-4] => { 2026-03-07 00:44:27.813010 | orchestrator |  "lvm_report": { 2026-03-07 00:44:27.813022 | orchestrator |  "lv": [ 2026-03-07 00:44:27.813032 | orchestrator |  { 2026-03-07 00:44:27.813043 | orchestrator |  "lv_name": "osd-block-50ec861c-6b17-5421-b6cb-257ea2a8b129", 2026-03-07 00:44:27.813055 | orchestrator |  "vg_name": "ceph-50ec861c-6b17-5421-b6cb-257ea2a8b129" 2026-03-07 00:44:27.813066 | orchestrator |  }, 2026-03-07 00:44:27.813076 | orchestrator |  { 2026-03-07 00:44:27.813087 | orchestrator |  "lv_name": "osd-block-c6d853cd-f8df-5f7f-ab25-9ac4f40a4d2c", 2026-03-07 00:44:27.813098 | orchestrator |  "vg_name": "ceph-c6d853cd-f8df-5f7f-ab25-9ac4f40a4d2c" 2026-03-07 00:44:27.813109 | orchestrator |  } 2026-03-07 00:44:27.813120 | orchestrator |  ], 2026-03-07 00:44:27.813131 | orchestrator |  "pv": [ 2026-03-07 00:44:27.813142 | orchestrator |  { 2026-03-07 00:44:27.813153 | orchestrator |  "pv_name": "/dev/sdb", 2026-03-07 00:44:27.813169 | orchestrator |  "vg_name": "ceph-c6d853cd-f8df-5f7f-ab25-9ac4f40a4d2c" 2026-03-07 00:44:27.813180 | orchestrator |  }, 2026-03-07 
00:44:27.813190 | orchestrator |  { 2026-03-07 00:44:27.813201 | orchestrator |  "pv_name": "/dev/sdc", 2026-03-07 00:44:27.813212 | orchestrator |  "vg_name": "ceph-50ec861c-6b17-5421-b6cb-257ea2a8b129" 2026-03-07 00:44:27.813223 | orchestrator |  } 2026-03-07 00:44:27.813234 | orchestrator |  ] 2026-03-07 00:44:27.813244 | orchestrator |  } 2026-03-07 00:44:27.813256 | orchestrator | } 2026-03-07 00:44:27.813267 | orchestrator | 2026-03-07 00:44:27.813278 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-03-07 00:44:27.813290 | orchestrator | 2026-03-07 00:44:27.813301 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-07 00:44:27.813312 | orchestrator | Saturday 07 March 2026 00:44:22 +0000 (0:00:00.550) 0:00:53.556 ******** 2026-03-07 00:44:27.813323 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-03-07 00:44:27.813334 | orchestrator | 2026-03-07 00:44:27.813345 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-07 00:44:27.813356 | orchestrator | Saturday 07 March 2026 00:44:22 +0000 (0:00:00.274) 0:00:53.831 ******** 2026-03-07 00:44:27.813367 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:44:27.813378 | orchestrator | 2026-03-07 00:44:27.813389 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-07 00:44:27.813400 | orchestrator | Saturday 07 March 2026 00:44:22 +0000 (0:00:00.240) 0:00:54.071 ******** 2026-03-07 00:44:27.813411 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2026-03-07 00:44:27.813422 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2026-03-07 00:44:27.813433 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2026-03-07 00:44:27.813443 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2026-03-07 00:44:27.813461 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2026-03-07 00:44:27.813472 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2026-03-07 00:44:27.813483 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2026-03-07 00:44:27.813494 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2026-03-07 00:44:27.813505 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2026-03-07 00:44:27.813521 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2026-03-07 00:44:27.813532 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2026-03-07 00:44:27.813542 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2026-03-07 00:44:27.813585 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2026-03-07 00:44:27.813596 | orchestrator | 2026-03-07 00:44:27.813607 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-07 00:44:27.813618 | orchestrator | Saturday 07 March 2026 00:44:22 +0000 (0:00:00.428) 0:00:54.500 ******** 2026-03-07 00:44:27.813684 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:44:27.813695 | orchestrator | 2026-03-07 00:44:27.813706 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-07 00:44:27.813717 | orchestrator | Saturday 07 March 2026 00:44:23 +0000 (0:00:00.233) 0:00:54.733 ******** 2026-03-07 00:44:27.813728 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:44:27.813739 | orchestrator | 2026-03-07 
00:44:27.813750 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-07 00:44:27.813778 | orchestrator | Saturday 07 March 2026  00:44:23 +0000 (0:00:00.220) 0:00:54.954 ********
2026-03-07 00:44:27.813790 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:44:27.813800 | orchestrator |
2026-03-07 00:44:27.813811 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-07 00:44:27.813822 | orchestrator | Saturday 07 March 2026  00:44:23 +0000 (0:00:00.235) 0:00:55.189 ********
2026-03-07 00:44:27.813833 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:44:27.813843 | orchestrator |
2026-03-07 00:44:27.813854 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-07 00:44:27.813865 | orchestrator | Saturday 07 March 2026  00:44:23 +0000 (0:00:00.232) 0:00:55.421 ********
2026-03-07 00:44:27.813876 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:44:27.813887 | orchestrator |
2026-03-07 00:44:27.813898 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-07 00:44:27.813908 | orchestrator | Saturday 07 March 2026  00:44:24 +0000 (0:00:00.724) 0:00:56.146 ********
2026-03-07 00:44:27.813919 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:44:27.813930 | orchestrator |
2026-03-07 00:44:27.813941 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-07 00:44:27.813952 | orchestrator | Saturday 07 March 2026  00:44:24 +0000 (0:00:00.201) 0:00:56.348 ********
2026-03-07 00:44:27.813963 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:44:27.813973 | orchestrator |
2026-03-07 00:44:27.813984 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-07 00:44:27.813995 | orchestrator | Saturday 07 March 2026  00:44:25 +0000 (0:00:00.250) 0:00:56.599 ********
2026-03-07 00:44:27.814006 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:44:27.814073 | orchestrator |
2026-03-07 00:44:27.814086 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-07 00:44:27.814097 | orchestrator | Saturday 07 March 2026  00:44:25 +0000 (0:00:00.236) 0:00:56.835 ********
2026-03-07 00:44:27.814108 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_86fe1ccb-6d77-4e2c-ab6c-94ce433f2c86)
2026-03-07 00:44:27.814125 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_86fe1ccb-6d77-4e2c-ab6c-94ce433f2c86)
2026-03-07 00:44:27.814142 | orchestrator |
2026-03-07 00:44:27.814153 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-07 00:44:27.814164 | orchestrator | Saturday 07 March 2026  00:44:25 +0000 (0:00:00.458) 0:00:57.294 ********
2026-03-07 00:44:27.814175 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_34b2d3d1-49da-433c-9475-894febcc7103)
2026-03-07 00:44:27.814186 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_34b2d3d1-49da-433c-9475-894febcc7103)
2026-03-07 00:44:27.814197 | orchestrator |
2026-03-07 00:44:27.814208 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-07 00:44:27.814219 | orchestrator | Saturday 07 March 2026  00:44:26 +0000 (0:00:00.446) 0:00:57.741 ********
2026-03-07 00:44:27.814229 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_c20bba62-61d0-4a1a-9760-7959bbad95dc)
2026-03-07 00:44:27.814240 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_c20bba62-61d0-4a1a-9760-7959bbad95dc)
2026-03-07 00:44:27.814251 | orchestrator |
2026-03-07 00:44:27.814262 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-07 00:44:27.814282 | orchestrator | Saturday 07 March 2026  00:44:26 +0000 (0:00:00.480) 0:00:58.222 ********
2026-03-07 00:44:27.814302 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_56f8efd0-3f15-4df4-bf76-395b3326da9d)
2026-03-07 00:44:27.814322 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_56f8efd0-3f15-4df4-bf76-395b3326da9d)
2026-03-07 00:44:27.814340 | orchestrator |
2026-03-07 00:44:27.814360 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-07 00:44:27.814379 | orchestrator | Saturday 07 March 2026  00:44:27 +0000 (0:00:00.437) 0:00:58.659 ********
2026-03-07 00:44:27.814399 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-03-07 00:44:27.814419 | orchestrator |
2026-03-07 00:44:27.814438 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-07 00:44:27.814460 | orchestrator | Saturday 07 March 2026  00:44:27 +0000 (0:00:00.332) 0:00:58.992 ********
2026-03-07 00:44:27.814482 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2026-03-07 00:44:27.814494 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2026-03-07 00:44:27.814504 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2026-03-07 00:44:27.814515 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2026-03-07 00:44:27.814525 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2026-03-07 00:44:27.814536 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2026-03-07 00:44:27.814547 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2026-03-07 00:44:27.814557 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2026-03-07 00:44:27.814568 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2026-03-07 00:44:27.814578 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2026-03-07 00:44:27.814589 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2026-03-07 00:44:27.814609 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2026-03-07 00:44:36.658659 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2026-03-07 00:44:36.658774 | orchestrator |
2026-03-07 00:44:36.658792 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-07 00:44:36.658804 | orchestrator | Saturday 07 March 2026  00:44:27 +0000 (0:00:00.400) 0:00:59.392 ********
2026-03-07 00:44:36.658837 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:44:36.658850 | orchestrator |
2026-03-07 00:44:36.658861 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-07 00:44:36.658872 | orchestrator | Saturday 07 March 2026  00:44:28 +0000 (0:00:00.202) 0:00:59.595 ********
2026-03-07 00:44:36.658882 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:44:36.658893 | orchestrator |
2026-03-07 00:44:36.658904 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-07 00:44:36.658914 | orchestrator | Saturday 07 March 2026  00:44:28 +0000 (0:00:00.565) 0:01:00.160 ********
2026-03-07 00:44:36.658925 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:44:36.658999 | orchestrator |
2026-03-07 00:44:36.659013 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-07 00:44:36.659025 | orchestrator | Saturday 07 March 2026  00:44:28 +0000 (0:00:00.187) 0:01:00.348 ********
2026-03-07 00:44:36.659036 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:44:36.659046 | orchestrator |
2026-03-07 00:44:36.659058 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-07 00:44:36.659069 | orchestrator | Saturday 07 March 2026  00:44:29 +0000 (0:00:00.202) 0:01:00.550 ********
2026-03-07 00:44:36.659079 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:44:36.659090 | orchestrator |
2026-03-07 00:44:36.659101 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-07 00:44:36.659112 | orchestrator | Saturday 07 March 2026  00:44:29 +0000 (0:00:00.202) 0:01:00.753 ********
2026-03-07 00:44:36.659123 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:44:36.659134 | orchestrator |
2026-03-07 00:44:36.659157 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-07 00:44:36.659168 | orchestrator | Saturday 07 March 2026  00:44:29 +0000 (0:00:00.203) 0:01:00.956 ********
2026-03-07 00:44:36.659179 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:44:36.659190 | orchestrator |
2026-03-07 00:44:36.659201 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-07 00:44:36.659214 | orchestrator | Saturday 07 March 2026  00:44:29 +0000 (0:00:00.209) 0:01:01.166 ********
2026-03-07 00:44:36.659257 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:44:36.659271 | orchestrator |
2026-03-07 00:44:36.659284 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-07 00:44:36.659297 | orchestrator | Saturday 07 March 2026  00:44:29 +0000 (0:00:00.213) 0:01:01.380 ********
2026-03-07 00:44:36.659310 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2026-03-07 00:44:36.659323 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2026-03-07 00:44:36.659336 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2026-03-07 00:44:36.659348 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2026-03-07 00:44:36.659360 | orchestrator |
2026-03-07 00:44:36.659373 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-07 00:44:36.659386 | orchestrator | Saturday 07 March 2026  00:44:30 +0000 (0:00:00.615) 0:01:01.995 ********
2026-03-07 00:44:36.659399 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:44:36.659411 | orchestrator |
2026-03-07 00:44:36.659423 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-07 00:44:36.659437 | orchestrator | Saturday 07 March 2026  00:44:30 +0000 (0:00:00.192) 0:01:02.188 ********
2026-03-07 00:44:36.659449 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:44:36.659459 | orchestrator |
2026-03-07 00:44:36.659470 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-07 00:44:36.659481 | orchestrator | Saturday 07 March 2026  00:44:30 +0000 (0:00:00.194) 0:01:02.382 ********
2026-03-07 00:44:36.659492 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:44:36.659502 | orchestrator |
2026-03-07 00:44:36.659524 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-07 00:44:36.659535 | orchestrator | Saturday 07 March 2026  00:44:31 +0000 (0:00:00.201) 0:01:02.584 ********
2026-03-07 00:44:36.659553 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:44:36.659564 | orchestrator |
2026-03-07 00:44:36.659575 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-03-07 00:44:36.659586 | orchestrator | Saturday 07 March 2026  00:44:31 +0000 (0:00:00.202) 0:01:02.786 ********
2026-03-07 00:44:36.659596 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:44:36.659607 | orchestrator |
2026-03-07 00:44:36.659617 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-03-07 00:44:36.659628 | orchestrator | Saturday 07 March 2026  00:44:31 +0000 (0:00:00.292) 0:01:03.079 ********
2026-03-07 00:44:36.659639 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f3e458ba-b75f-5cb4-a1c9-e61fe3486295'}})
2026-03-07 00:44:36.659650 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '5cfbeba1-5550-585b-8a7e-42a4921f8eca'}})
2026-03-07 00:44:36.659661 | orchestrator |
2026-03-07 00:44:36.659672 | orchestrator | TASK [Create block VGs] ********************************************************
2026-03-07 00:44:36.659699 | orchestrator | Saturday 07 March 2026  00:44:31 +0000 (0:00:00.185) 0:01:03.264 ********
2026-03-07 00:44:36.659738 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-f3e458ba-b75f-5cb4-a1c9-e61fe3486295', 'data_vg': 'ceph-f3e458ba-b75f-5cb4-a1c9-e61fe3486295'})
2026-03-07 00:44:36.659780 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-5cfbeba1-5550-585b-8a7e-42a4921f8eca', 'data_vg': 'ceph-5cfbeba1-5550-585b-8a7e-42a4921f8eca'})
2026-03-07 00:44:36.659791 | orchestrator |
2026-03-07 00:44:36.659802 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-03-07 00:44:36.659831 | orchestrator | Saturday 07 March 2026  00:44:33 +0000 (0:00:01.869) 0:01:05.133 ********
2026-03-07 00:44:36.659843 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f3e458ba-b75f-5cb4-a1c9-e61fe3486295', 'data_vg': 'ceph-f3e458ba-b75f-5cb4-a1c9-e61fe3486295'})
2026-03-07 00:44:36.659855 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5cfbeba1-5550-585b-8a7e-42a4921f8eca', 'data_vg': 'ceph-5cfbeba1-5550-585b-8a7e-42a4921f8eca'})
2026-03-07 00:44:36.659866 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:44:36.659877 | orchestrator |
2026-03-07 00:44:36.659889 | orchestrator | TASK [Create block LVs] ********************************************************
2026-03-07 00:44:36.659899 | orchestrator | Saturday 07 March 2026  00:44:33 +0000 (0:00:00.205) 0:01:05.339 ********
2026-03-07 00:44:36.659910 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-f3e458ba-b75f-5cb4-a1c9-e61fe3486295', 'data_vg': 'ceph-f3e458ba-b75f-5cb4-a1c9-e61fe3486295'})
2026-03-07 00:44:36.659921 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-5cfbeba1-5550-585b-8a7e-42a4921f8eca', 'data_vg': 'ceph-5cfbeba1-5550-585b-8a7e-42a4921f8eca'})
2026-03-07 00:44:36.659932 | orchestrator |
2026-03-07 00:44:36.659943 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-03-07 00:44:36.659954 | orchestrator | Saturday 07 March 2026  00:44:35 +0000 (0:00:01.354) 0:01:06.693 ********
2026-03-07 00:44:36.659964 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f3e458ba-b75f-5cb4-a1c9-e61fe3486295', 'data_vg': 'ceph-f3e458ba-b75f-5cb4-a1c9-e61fe3486295'})
2026-03-07 00:44:36.659975 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5cfbeba1-5550-585b-8a7e-42a4921f8eca', 'data_vg': 'ceph-5cfbeba1-5550-585b-8a7e-42a4921f8eca'})
2026-03-07 00:44:36.659986 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:44:36.659997 | orchestrator |
2026-03-07 00:44:36.660008 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-03-07 00:44:36.660019 | orchestrator | Saturday 07 March 2026  00:44:35 +0000 (0:00:00.167) 0:01:06.861 ********
2026-03-07 00:44:36.660029 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:44:36.660040 | orchestrator |
2026-03-07 00:44:36.660051 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-03-07 00:44:36.660062 | orchestrator | Saturday 07 March 2026  00:44:35 +0000 (0:00:00.164) 0:01:07.026 ********
2026-03-07 00:44:36.660080 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f3e458ba-b75f-5cb4-a1c9-e61fe3486295', 'data_vg': 'ceph-f3e458ba-b75f-5cb4-a1c9-e61fe3486295'})
2026-03-07 00:44:36.660092 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5cfbeba1-5550-585b-8a7e-42a4921f8eca', 'data_vg': 'ceph-5cfbeba1-5550-585b-8a7e-42a4921f8eca'})
2026-03-07 00:44:36.660102 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:44:36.660113 | orchestrator |
2026-03-07 00:44:36.660124 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-03-07 00:44:36.660135 | orchestrator | Saturday 07 March 2026  00:44:35 +0000 (0:00:00.183) 0:01:07.209 ********
2026-03-07 00:44:36.660145 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:44:36.660156 | orchestrator |
2026-03-07 00:44:36.660167 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-03-07 00:44:36.660178 | orchestrator | Saturday 07 March 2026  00:44:35 +0000 (0:00:00.135) 0:01:07.345 ********
2026-03-07 00:44:36.660188 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f3e458ba-b75f-5cb4-a1c9-e61fe3486295', 'data_vg': 'ceph-f3e458ba-b75f-5cb4-a1c9-e61fe3486295'})
2026-03-07 00:44:36.660199 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5cfbeba1-5550-585b-8a7e-42a4921f8eca', 'data_vg': 'ceph-5cfbeba1-5550-585b-8a7e-42a4921f8eca'})
2026-03-07 00:44:36.660210 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:44:36.660221 | orchestrator |
2026-03-07 00:44:36.660232 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-03-07 00:44:36.660266 | orchestrator | Saturday 07 March 2026  00:44:35 +0000 (0:00:00.149) 0:01:07.494 ********
2026-03-07 00:44:36.660288 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:44:36.660308 | orchestrator |
2026-03-07 00:44:36.660325 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-03-07 00:44:36.660337 | orchestrator | Saturday 07 March 2026  00:44:36 +0000 (0:00:00.133) 0:01:07.628 ********
2026-03-07 00:44:36.660347 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f3e458ba-b75f-5cb4-a1c9-e61fe3486295', 'data_vg': 'ceph-f3e458ba-b75f-5cb4-a1c9-e61fe3486295'})
2026-03-07 00:44:36.660358 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5cfbeba1-5550-585b-8a7e-42a4921f8eca', 'data_vg': 'ceph-5cfbeba1-5550-585b-8a7e-42a4921f8eca'})
2026-03-07 00:44:36.660369 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:44:36.660379 | orchestrator |
2026-03-07 00:44:36.660390 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-03-07 00:44:36.660401 | orchestrator | Saturday 07 March 2026  00:44:36 +0000 (0:00:00.305) 0:01:07.786 ********
2026-03-07 00:44:36.660411 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:44:36.660422 | orchestrator |
2026-03-07 00:44:36.660432 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-03-07 00:44:36.660443 | orchestrator | Saturday 07 March 2026  00:44:36 +0000 (0:00:00.305) 0:01:08.091 ********
2026-03-07 00:44:36.660462 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f3e458ba-b75f-5cb4-a1c9-e61fe3486295', 'data_vg': 'ceph-f3e458ba-b75f-5cb4-a1c9-e61fe3486295'})
2026-03-07 00:44:42.942385 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5cfbeba1-5550-585b-8a7e-42a4921f8eca', 'data_vg': 'ceph-5cfbeba1-5550-585b-8a7e-42a4921f8eca'})
2026-03-07 00:44:42.942512 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:44:42.942528 | orchestrator |
2026-03-07 00:44:42.942538 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-03-07 00:44:42.942549 | orchestrator | Saturday 07 March 2026  00:44:36 +0000 (0:00:00.180) 0:01:08.271 ********
2026-03-07 00:44:42.942557 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f3e458ba-b75f-5cb4-a1c9-e61fe3486295', 'data_vg': 'ceph-f3e458ba-b75f-5cb4-a1c9-e61fe3486295'})
2026-03-07 00:44:42.942566 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5cfbeba1-5550-585b-8a7e-42a4921f8eca', 'data_vg': 'ceph-5cfbeba1-5550-585b-8a7e-42a4921f8eca'})
2026-03-07 00:44:42.942595 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:44:42.942603 | orchestrator |
2026-03-07 00:44:42.942612 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-03-07 00:44:42.942620 | orchestrator | Saturday 07 March 2026  00:44:36 +0000 (0:00:00.154) 0:01:08.425 ********
2026-03-07 00:44:42.942628 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f3e458ba-b75f-5cb4-a1c9-e61fe3486295', 'data_vg': 'ceph-f3e458ba-b75f-5cb4-a1c9-e61fe3486295'})
2026-03-07 00:44:42.942636 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5cfbeba1-5550-585b-8a7e-42a4921f8eca', 'data_vg': 'ceph-5cfbeba1-5550-585b-8a7e-42a4921f8eca'})
2026-03-07 00:44:42.942644 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:44:42.942652 | orchestrator |
2026-03-07 00:44:42.942660 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-03-07 00:44:42.942680 | orchestrator | Saturday 07 March 2026  00:44:37 +0000 (0:00:00.151) 0:01:08.577 ********
2026-03-07 00:44:42.942688 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:44:42.942696 | orchestrator |
2026-03-07 00:44:42.942704 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-03-07 00:44:42.942740 | orchestrator | Saturday 07 March 2026  00:44:37 +0000 (0:00:00.144) 0:01:08.721 ********
2026-03-07 00:44:42.942749 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:44:42.942757 | orchestrator |
2026-03-07 00:44:42.942765 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-03-07 00:44:42.942773 | orchestrator | Saturday 07 March 2026  00:44:37 +0000 (0:00:00.135) 0:01:08.857 ********
2026-03-07 00:44:42.942780 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:44:42.942788 | orchestrator |
2026-03-07 00:44:42.942796 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-03-07 00:44:42.942804 | orchestrator | Saturday 07 March 2026  00:44:37 +0000 (0:00:00.142) 0:01:08.999 ********
2026-03-07 00:44:42.942812 | orchestrator | ok: [testbed-node-5] => {
2026-03-07 00:44:42.942820 | orchestrator |  "_num_osds_wanted_per_db_vg": {}
2026-03-07 00:44:42.942829 | orchestrator | }
2026-03-07 00:44:42.942850 | orchestrator |
2026-03-07 00:44:42.942858 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-03-07 00:44:42.942875 | orchestrator | Saturday 07 March 2026  00:44:37 +0000 (0:00:00.136) 0:01:09.136 ********
2026-03-07 00:44:42.942884 | orchestrator | ok: [testbed-node-5] => {
2026-03-07 00:44:42.942892 | orchestrator |  "_num_osds_wanted_per_wal_vg": {}
2026-03-07 00:44:42.942901 | orchestrator | }
2026-03-07 00:44:42.942910 | orchestrator |
2026-03-07 00:44:42.942919 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-03-07 00:44:42.942928 | orchestrator | Saturday 07 March 2026  00:44:37 +0000 (0:00:00.125) 0:01:09.262 ********
2026-03-07 00:44:42.942937 | orchestrator | ok: [testbed-node-5] => {
2026-03-07 00:44:42.942947 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {}
2026-03-07 00:44:42.942956 | orchestrator | }
2026-03-07 00:44:42.942966 | orchestrator |
2026-03-07 00:44:42.942975 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-03-07 00:44:42.942984 | orchestrator | Saturday 07 March 2026  00:44:37 +0000 (0:00:00.151) 0:01:09.413 ********
2026-03-07 00:44:42.942993 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:44:42.943002 | orchestrator |
2026-03-07 00:44:42.943010 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-03-07 00:44:42.943020 | orchestrator | Saturday 07 March 2026  00:44:38 +0000 (0:00:00.516) 0:01:09.930 ********
2026-03-07 00:44:42.943028 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:44:42.943037 | orchestrator |
2026-03-07 00:44:42.943046 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-03-07 00:44:42.943055 | orchestrator | Saturday 07 March 2026  00:44:38 +0000 (0:00:00.511) 0:01:10.442 ********
2026-03-07 00:44:42.943064 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:44:42.943081 | orchestrator |
2026-03-07 00:44:42.943090 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-03-07 00:44:42.943099 | orchestrator | Saturday 07 March 2026  00:44:39 +0000 (0:00:00.721) 0:01:11.163 ********
2026-03-07 00:44:42.943108 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:44:42.943117 | orchestrator |
2026-03-07 00:44:42.943126 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-03-07 00:44:42.943134 | orchestrator | Saturday 07 March 2026  00:44:39 +0000 (0:00:00.140) 0:01:11.304 ********
2026-03-07 00:44:42.943142 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:44:42.943149 | orchestrator |
2026-03-07 00:44:42.943157 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-03-07 00:44:42.943165 | orchestrator | Saturday 07 March 2026  00:44:39 +0000 (0:00:00.105) 0:01:11.410 ********
2026-03-07 00:44:42.943173 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:44:42.943181 | orchestrator |
2026-03-07 00:44:42.943192 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-03-07 00:44:42.943205 | orchestrator | Saturday 07 March 2026  00:44:40 +0000 (0:00:00.132) 0:01:11.543 ********
2026-03-07 00:44:42.943219 | orchestrator | ok: [testbed-node-5] => {
2026-03-07 00:44:42.943233 | orchestrator |  "vgs_report": {
2026-03-07 00:44:42.943248 | orchestrator |  "vg": []
2026-03-07 00:44:42.943282 | orchestrator |  }
2026-03-07 00:44:42.943298 | orchestrator | }
2026-03-07 00:44:42.943311 | orchestrator |
2026-03-07 00:44:42.943324 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-03-07 00:44:42.943338 | orchestrator | Saturday 07 March 2026  00:44:40 +0000 (0:00:00.136) 0:01:11.679 ********
2026-03-07 00:44:42.943352 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:44:42.943368 | orchestrator |
2026-03-07 00:44:42.943383 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-03-07 00:44:42.943396 | orchestrator | Saturday 07 March 2026  00:44:40 +0000 (0:00:00.148) 0:01:11.828 ********
2026-03-07 00:44:42.943410 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:44:42.943422 | orchestrator |
2026-03-07 00:44:42.943433 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-03-07 00:44:42.943447 | orchestrator | Saturday 07 March 2026  00:44:40 +0000 (0:00:00.136) 0:01:11.965 ********
2026-03-07 00:44:42.943462 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:44:42.943477 | orchestrator |
2026-03-07 00:44:42.943492 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-03-07 00:44:42.943506 | orchestrator | Saturday 07 March 2026  00:44:40 +0000 (0:00:00.121) 0:01:12.087 ********
2026-03-07 00:44:42.943519 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:44:42.943527 | orchestrator |
2026-03-07 00:44:42.943535 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-03-07 00:44:42.943543 | orchestrator | Saturday 07 March 2026  00:44:40 +0000 (0:00:00.150) 0:01:12.237 ********
2026-03-07 00:44:42.943551 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:44:42.943559 | orchestrator |
2026-03-07 00:44:42.943567 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-03-07 00:44:42.943575 | orchestrator | Saturday 07 March 2026  00:44:40 +0000 (0:00:00.133) 0:01:12.370 ********
2026-03-07 00:44:42.943582 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:44:42.943590 | orchestrator |
2026-03-07 00:44:42.943598 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-03-07 00:44:42.943613 | orchestrator | Saturday 07 March 2026  00:44:41 +0000 (0:00:00.148) 0:01:12.518 ********
2026-03-07 00:44:42.943621 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:44:42.943628 | orchestrator |
2026-03-07 00:44:42.943636 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-03-07 00:44:42.943644 | orchestrator | Saturday 07 March 2026  00:44:41 +0000 (0:00:00.149) 0:01:12.667 ********
2026-03-07 00:44:42.943652 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:44:42.943659 | orchestrator |
2026-03-07 00:44:42.943667 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-03-07 00:44:42.943683 | orchestrator | Saturday 07 March 2026  00:44:41 +0000 (0:00:00.374) 0:01:13.042 ********
2026-03-07 00:44:42.943691 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:44:42.943698 | orchestrator |
2026-03-07 00:44:42.943706 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-03-07 00:44:42.943748 | orchestrator | Saturday 07 March 2026  00:44:41 +0000 (0:00:00.170) 0:01:13.212 ********
2026-03-07 00:44:42.943758 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:44:42.943766 | orchestrator |
2026-03-07 00:44:42.943774 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-03-07 00:44:42.943782 | orchestrator | Saturday 07 March 2026  00:44:41 +0000 (0:00:00.146) 0:01:13.359 ********
2026-03-07 00:44:42.943790 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:44:42.943798 | orchestrator |
2026-03-07 00:44:42.943806 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-03-07 00:44:42.943814 | orchestrator | Saturday 07 March 2026  00:44:42 +0000 (0:00:00.214) 0:01:13.573 ********
2026-03-07 00:44:42.943821 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:44:42.943829 | orchestrator |
2026-03-07 00:44:42.943837 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-03-07 00:44:42.943845 | orchestrator | Saturday 07 March 2026  00:44:42 +0000 (0:00:00.146) 0:01:13.720 ********
2026-03-07 00:44:42.943853 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:44:42.943860 | orchestrator |
2026-03-07 00:44:42.943868 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-03-07 00:44:42.943876 | orchestrator | Saturday 07 March 2026  00:44:42 +0000 (0:00:00.148) 0:01:13.868 ********
2026-03-07 00:44:42.943884 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:44:42.943892 | orchestrator |
2026-03-07 00:44:42.943899 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-03-07 00:44:42.943907 | orchestrator | Saturday 07 March 2026  00:44:42 +0000 (0:00:00.162) 0:01:14.031 ********
2026-03-07 00:44:42.943915 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f3e458ba-b75f-5cb4-a1c9-e61fe3486295', 'data_vg': 'ceph-f3e458ba-b75f-5cb4-a1c9-e61fe3486295'})
2026-03-07 00:44:42.943923 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5cfbeba1-5550-585b-8a7e-42a4921f8eca', 'data_vg': 'ceph-5cfbeba1-5550-585b-8a7e-42a4921f8eca'})
2026-03-07 00:44:42.943931 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:44:42.943939 | orchestrator |
2026-03-07 00:44:42.943947 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-03-07 00:44:42.943955 | orchestrator | Saturday 07 March 2026  00:44:42 +0000 (0:00:00.184) 0:01:14.215 ********
2026-03-07 00:44:42.943962 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f3e458ba-b75f-5cb4-a1c9-e61fe3486295', 'data_vg': 'ceph-f3e458ba-b75f-5cb4-a1c9-e61fe3486295'})
2026-03-07 00:44:42.943970 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5cfbeba1-5550-585b-8a7e-42a4921f8eca', 'data_vg': 'ceph-5cfbeba1-5550-585b-8a7e-42a4921f8eca'})
2026-03-07 00:44:42.943978 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:44:42.943986 | orchestrator |
2026-03-07 00:44:42.943994 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-03-07 00:44:42.944002 | orchestrator | Saturday 07 March 2026  00:44:42 +0000 (0:00:00.154) 0:01:14.370 ********
2026-03-07 00:44:42.944019 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f3e458ba-b75f-5cb4-a1c9-e61fe3486295', 'data_vg': 'ceph-f3e458ba-b75f-5cb4-a1c9-e61fe3486295'})
2026-03-07 00:44:46.378327 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5cfbeba1-5550-585b-8a7e-42a4921f8eca', 'data_vg': 'ceph-5cfbeba1-5550-585b-8a7e-42a4921f8eca'})
2026-03-07 00:44:46.378452 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:44:46.378467 | orchestrator |
2026-03-07 00:44:46.378480 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-03-07 00:44:46.378491 | orchestrator | Saturday 07 March 2026  00:44:43 +0000 (0:00:00.161) 0:01:14.531 ********
2026-03-07 00:44:46.378523 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f3e458ba-b75f-5cb4-a1c9-e61fe3486295', 'data_vg': 'ceph-f3e458ba-b75f-5cb4-a1c9-e61fe3486295'})
2026-03-07 00:44:46.378533 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5cfbeba1-5550-585b-8a7e-42a4921f8eca', 'data_vg': 'ceph-5cfbeba1-5550-585b-8a7e-42a4921f8eca'})
2026-03-07 00:44:46.378543 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:44:46.378553 | orchestrator |
2026-03-07 00:44:46.378563 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-03-07 00:44:46.378573 | orchestrator | Saturday 07 March 2026  00:44:43 +0000 (0:00:00.221) 0:01:14.753 ********
2026-03-07 00:44:46.378582 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f3e458ba-b75f-5cb4-a1c9-e61fe3486295', 'data_vg': 'ceph-f3e458ba-b75f-5cb4-a1c9-e61fe3486295'})
2026-03-07 00:44:46.378607 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5cfbeba1-5550-585b-8a7e-42a4921f8eca', 'data_vg': 'ceph-5cfbeba1-5550-585b-8a7e-42a4921f8eca'})
2026-03-07 00:44:46.378617 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:44:46.378626 | orchestrator |
2026-03-07 00:44:46.378636 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-03-07 00:44:46.378646 | orchestrator | Saturday 07 March 2026  00:44:43 +0000 (0:00:00.185) 0:01:14.938 ********
2026-03-07 00:44:46.378655 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f3e458ba-b75f-5cb4-a1c9-e61fe3486295', 'data_vg': 'ceph-f3e458ba-b75f-5cb4-a1c9-e61fe3486295'})
2026-03-07 00:44:46.378665 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5cfbeba1-5550-585b-8a7e-42a4921f8eca', 'data_vg': 'ceph-5cfbeba1-5550-585b-8a7e-42a4921f8eca'})
2026-03-07 00:44:46.378675 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:44:46.378684 | orchestrator |
2026-03-07 00:44:46.378694 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2026-03-07 00:44:46.378704 | orchestrator | Saturday 07 March 2026  00:44:43 +0000 (0:00:00.441) 0:01:15.379 ********
2026-03-07 00:44:46.378714 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f3e458ba-b75f-5cb4-a1c9-e61fe3486295', 'data_vg': 'ceph-f3e458ba-b75f-5cb4-a1c9-e61fe3486295'})
2026-03-07 00:44:46.378724 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5cfbeba1-5550-585b-8a7e-42a4921f8eca', 'data_vg': 'ceph-5cfbeba1-5550-585b-8a7e-42a4921f8eca'})
2026-03-07 00:44:46.378800 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:44:46.378820 | orchestrator |
2026-03-07 00:44:46.378839 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-03-07 00:44:46.378857 | orchestrator | Saturday 07 March 2026  00:44:44 +0000 (0:00:00.174) 0:01:15.554 ********
2026-03-07 00:44:46.378875 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f3e458ba-b75f-5cb4-a1c9-e61fe3486295', 'data_vg': 'ceph-f3e458ba-b75f-5cb4-a1c9-e61fe3486295'})
2026-03-07 00:44:46.378894 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5cfbeba1-5550-585b-8a7e-42a4921f8eca', 'data_vg': 'ceph-5cfbeba1-5550-585b-8a7e-42a4921f8eca'})
2026-03-07 00:44:46.378911 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:44:46.378931 | orchestrator |
2026-03-07 00:44:46.378951 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-03-07 00:44:46.378970 | orchestrator | Saturday 07 March 2026  00:44:44 +0000 (0:00:00.169) 0:01:15.723 ********
2026-03-07 00:44:46.378982 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:44:46.378995 | orchestrator |
2026-03-07 00:44:46.379007 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-03-07 00:44:46.379018 | orchestrator | Saturday 07 March 2026  00:44:44 +0000 (0:00:00.560) 0:01:16.284 ********
2026-03-07 00:44:46.379029 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:44:46.379040 | orchestrator |
2026-03-07 00:44:46.379051 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2026-03-07 00:44:46.379073 | orchestrator | Saturday 07 March 2026  00:44:45 +0000 (0:00:00.536) 0:01:16.820 ********
2026-03-07 00:44:46.379084 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:44:46.379095 | orchestrator |
2026-03-07 00:44:46.379106 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2026-03-07 00:44:46.379115 | orchestrator | Saturday 07 March 2026  00:44:45 +0000 (0:00:00.159) 0:01:16.980 ********
2026-03-07 00:44:46.379125 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-5cfbeba1-5550-585b-8a7e-42a4921f8eca', 'vg_name': 'ceph-5cfbeba1-5550-585b-8a7e-42a4921f8eca'})
2026-03-07 00:44:46.379137 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-f3e458ba-b75f-5cb4-a1c9-e61fe3486295', 'vg_name': 'ceph-f3e458ba-b75f-5cb4-a1c9-e61fe3486295'})
2026-03-07 00:44:46.379146 | orchestrator |
2026-03-07 00:44:46.379156 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2026-03-07 00:44:46.379165 | orchestrator | Saturday 07 March 2026  00:44:45 +0000 (0:00:00.177) 0:01:17.158 ********
2026-03-07 00:44:46.379194 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f3e458ba-b75f-5cb4-a1c9-e61fe3486295', 'data_vg': 'ceph-f3e458ba-b75f-5cb4-a1c9-e61fe3486295'})
2026-03-07 00:44:46.379204 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5cfbeba1-5550-585b-8a7e-42a4921f8eca', 'data_vg': 'ceph-5cfbeba1-5550-585b-8a7e-42a4921f8eca'})
2026-03-07 00:44:46.379214 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:44:46.379223 | orchestrator |
2026-03-07 00:44:46.379233 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2026-03-07 00:44:46.379243 | orchestrator | Saturday 07 March 2026  00:44:45 +0000 (0:00:00.181) 0:01:17.339 ********
2026-03-07 00:44:46.379252 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f3e458ba-b75f-5cb4-a1c9-e61fe3486295', 'data_vg': 'ceph-f3e458ba-b75f-5cb4-a1c9-e61fe3486295'})
2026-03-07 00:44:46.379261 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5cfbeba1-5550-585b-8a7e-42a4921f8eca', 'data_vg': 'ceph-5cfbeba1-5550-585b-8a7e-42a4921f8eca'})
2026-03-07 00:44:46.379271 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:44:46.379280 | orchestrator |
2026-03-07 00:44:46.379290 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2026-03-07 00:44:46.379299 | orchestrator | Saturday 07 March 2026  00:44:46 +0000 (0:00:00.174) 0:01:17.513 ********
2026-03-07 00:44:46.379309 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f3e458ba-b75f-5cb4-a1c9-e61fe3486295', 'data_vg': 'ceph-f3e458ba-b75f-5cb4-a1c9-e61fe3486295'})
2026-03-07 00:44:46.379319 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5cfbeba1-5550-585b-8a7e-42a4921f8eca', 'data_vg': 'ceph-5cfbeba1-5550-585b-8a7e-42a4921f8eca'})
2026-03-07 00:44:46.379328 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:44:46.379338 | orchestrator |
2026-03-07 00:44:46.379347 | orchestrator | TASK [Print LVM report data] ***************************************************
2026-03-07 00:44:46.379357 | orchestrator | Saturday 07 March 2026  00:44:46 +0000 (0:00:00.178) 0:01:17.692 ********
2026-03-07 00:44:46.379367 |
orchestrator | ok: [testbed-node-5] => { 2026-03-07 00:44:46.379376 | orchestrator |  "lvm_report": { 2026-03-07 00:44:46.379386 | orchestrator |  "lv": [ 2026-03-07 00:44:46.379396 | orchestrator |  { 2026-03-07 00:44:46.379406 | orchestrator |  "lv_name": "osd-block-5cfbeba1-5550-585b-8a7e-42a4921f8eca", 2026-03-07 00:44:46.379416 | orchestrator |  "vg_name": "ceph-5cfbeba1-5550-585b-8a7e-42a4921f8eca" 2026-03-07 00:44:46.379425 | orchestrator |  }, 2026-03-07 00:44:46.379435 | orchestrator |  { 2026-03-07 00:44:46.379444 | orchestrator |  "lv_name": "osd-block-f3e458ba-b75f-5cb4-a1c9-e61fe3486295", 2026-03-07 00:44:46.379454 | orchestrator |  "vg_name": "ceph-f3e458ba-b75f-5cb4-a1c9-e61fe3486295" 2026-03-07 00:44:46.379463 | orchestrator |  } 2026-03-07 00:44:46.379473 | orchestrator |  ], 2026-03-07 00:44:46.379482 | orchestrator |  "pv": [ 2026-03-07 00:44:46.379499 | orchestrator |  { 2026-03-07 00:44:46.379509 | orchestrator |  "pv_name": "/dev/sdb", 2026-03-07 00:44:46.379519 | orchestrator |  "vg_name": "ceph-f3e458ba-b75f-5cb4-a1c9-e61fe3486295" 2026-03-07 00:44:46.379528 | orchestrator |  }, 2026-03-07 00:44:46.379537 | orchestrator |  { 2026-03-07 00:44:46.379547 | orchestrator |  "pv_name": "/dev/sdc", 2026-03-07 00:44:46.379557 | orchestrator |  "vg_name": "ceph-5cfbeba1-5550-585b-8a7e-42a4921f8eca" 2026-03-07 00:44:46.379566 | orchestrator |  } 2026-03-07 00:44:46.379575 | orchestrator |  ] 2026-03-07 00:44:46.379585 | orchestrator |  } 2026-03-07 00:44:46.379594 | orchestrator | } 2026-03-07 00:44:46.379604 | orchestrator | 2026-03-07 00:44:46.379613 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-07 00:44:46.379623 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-03-07 00:44:46.379633 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-03-07 00:44:46.379643 | 
orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-03-07 00:44:46.379652 | orchestrator | 2026-03-07 00:44:46.379662 | orchestrator | 2026-03-07 00:44:46.379671 | orchestrator | 2026-03-07 00:44:46.379681 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-07 00:44:46.379691 | orchestrator | Saturday 07 March 2026 00:44:46 +0000 (0:00:00.168) 0:01:17.860 ******** 2026-03-07 00:44:46.379700 | orchestrator | =============================================================================== 2026-03-07 00:44:46.379710 | orchestrator | Create block VGs -------------------------------------------------------- 5.64s 2026-03-07 00:44:46.379719 | orchestrator | Create block LVs -------------------------------------------------------- 3.99s 2026-03-07 00:44:46.379729 | orchestrator | Add known partitions to the list of available block devices ------------- 1.98s 2026-03-07 00:44:46.379763 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.75s 2026-03-07 00:44:46.379784 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.73s 2026-03-07 00:44:46.379793 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.62s 2026-03-07 00:44:46.379803 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.61s 2026-03-07 00:44:46.379813 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.61s 2026-03-07 00:44:46.379830 | orchestrator | Add known links to the list of available block devices ------------------ 1.47s 2026-03-07 00:44:46.874809 | orchestrator | Add known partitions to the list of available block devices ------------- 1.21s 2026-03-07 00:44:46.874915 | orchestrator | Print LVM report data --------------------------------------------------- 1.05s 2026-03-07 00:44:46.874928 | 
orchestrator | Add known partitions to the list of available block devices ------------- 0.97s 2026-03-07 00:44:46.874940 | orchestrator | Add known links to the list of available block devices ------------------ 0.94s 2026-03-07 00:44:46.874950 | orchestrator | Add known links to the list of available block devices ------------------ 0.92s 2026-03-07 00:44:46.874961 | orchestrator | Fail if block LV defined in lvm_volumes is missing ---------------------- 0.81s 2026-03-07 00:44:46.874972 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.79s 2026-03-07 00:44:46.874983 | orchestrator | Get initial list of available block devices ----------------------------- 0.76s 2026-03-07 00:44:46.874993 | orchestrator | Print 'Create WAL LVs for ceph_db_wal_devices' -------------------------- 0.76s 2026-03-07 00:44:46.875004 | orchestrator | Add known partitions to the list of available block devices ------------- 0.75s 2026-03-07 00:44:46.875015 | orchestrator | Print 'Create DB VGs' --------------------------------------------------- 0.73s 2026-03-07 00:44:59.562628 | orchestrator | 2026-03-07 00:44:59 | INFO  | Prepare task for execution of facts. 2026-03-07 00:44:59.630598 | orchestrator | 2026-03-07 00:44:59 | INFO  | Task 2af29075-3bf6-4716-b3d7-a315065384ce (facts) was prepared for execution. 2026-03-07 00:44:59.630893 | orchestrator | 2026-03-07 00:44:59 | INFO  | It takes a moment until task 2af29075-3bf6-4716-b3d7-a315065384ce (facts) has been started and output is visible here. 
2026-03-07 00:45:12.782568 | orchestrator | 2026-03-07 00:45:12.782709 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-03-07 00:45:12.782738 | orchestrator | 2026-03-07 00:45:12.782760 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-03-07 00:45:12.782779 | orchestrator | Saturday 07 March 2026 00:45:04 +0000 (0:00:00.306) 0:00:00.306 ******** 2026-03-07 00:45:12.782798 | orchestrator | ok: [testbed-manager] 2026-03-07 00:45:12.782819 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:45:12.782838 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:45:12.782857 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:45:12.782876 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:45:12.782986 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:45:12.782999 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:45:12.783010 | orchestrator | 2026-03-07 00:45:12.783021 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-03-07 00:45:12.783032 | orchestrator | Saturday 07 March 2026 00:45:05 +0000 (0:00:01.120) 0:00:01.426 ******** 2026-03-07 00:45:12.783043 | orchestrator | skipping: [testbed-manager] 2026-03-07 00:45:12.783054 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:45:12.783065 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:45:12.783078 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:45:12.783090 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:45:12.783103 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:45:12.783115 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:45:12.783128 | orchestrator | 2026-03-07 00:45:12.783140 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-03-07 00:45:12.783152 | orchestrator | 2026-03-07 00:45:12.783165 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-03-07 00:45:12.783177 | orchestrator | Saturday 07 March 2026 00:45:06 +0000 (0:00:01.337) 0:00:02.764 ******** 2026-03-07 00:45:12.783190 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:45:12.783203 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:45:12.783215 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:45:12.783227 | orchestrator | ok: [testbed-manager] 2026-03-07 00:45:12.783240 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:45:12.783252 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:45:12.783264 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:45:12.783276 | orchestrator | 2026-03-07 00:45:12.783288 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-03-07 00:45:12.783300 | orchestrator | 2026-03-07 00:45:12.783313 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-03-07 00:45:12.783325 | orchestrator | Saturday 07 March 2026 00:45:11 +0000 (0:00:04.933) 0:00:07.698 ******** 2026-03-07 00:45:12.783337 | orchestrator | skipping: [testbed-manager] 2026-03-07 00:45:12.783349 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:45:12.783362 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:45:12.783374 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:45:12.783386 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:45:12.783398 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:45:12.783411 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:45:12.783424 | orchestrator | 2026-03-07 00:45:12.783435 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-07 00:45:12.783446 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-07 00:45:12.783458 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-03-07 00:45:12.783502 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-07 00:45:12.783513 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-07 00:45:12.783524 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-07 00:45:12.783535 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-07 00:45:12.783546 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-07 00:45:12.783556 | orchestrator | 2026-03-07 00:45:12.783567 | orchestrator | 2026-03-07 00:45:12.783578 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-07 00:45:12.783588 | orchestrator | Saturday 07 March 2026 00:45:12 +0000 (0:00:00.721) 0:00:08.420 ******** 2026-03-07 00:45:12.783599 | orchestrator | =============================================================================== 2026-03-07 00:45:12.783610 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.93s 2026-03-07 00:45:12.783621 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.34s 2026-03-07 00:45:12.783631 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.12s 2026-03-07 00:45:12.783643 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.72s 2026-03-07 00:45:25.358393 | orchestrator | 2026-03-07 00:45:25 | INFO  | Prepare task for execution of frr. 2026-03-07 00:45:25.436907 | orchestrator | 2026-03-07 00:45:25 | INFO  | Task e96a81ea-a0f2-4b01-92a1-ef9195bb2479 (frr) was prepared for execution. 
2026-03-07 00:45:25.437369 | orchestrator | 2026-03-07 00:45:25 | INFO  | It takes a moment until task e96a81ea-a0f2-4b01-92a1-ef9195bb2479 (frr) has been started and output is visible here. 2026-03-07 00:45:56.071475 | orchestrator | 2026-03-07 00:45:56.071610 | orchestrator | PLAY [Apply role frr] ********************************************************** 2026-03-07 00:45:56.071625 | orchestrator | 2026-03-07 00:45:56.071635 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ******** 2026-03-07 00:45:56.071643 | orchestrator | Saturday 07 March 2026 00:45:30 +0000 (0:00:00.249) 0:00:00.249 ******** 2026-03-07 00:45:56.071652 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager 2026-03-07 00:45:56.071661 | orchestrator | 2026-03-07 00:45:56.071669 | orchestrator | TASK [osism.services.frr : Pin frr package version] **************************** 2026-03-07 00:45:56.071677 | orchestrator | Saturday 07 March 2026 00:45:30 +0000 (0:00:00.225) 0:00:00.474 ******** 2026-03-07 00:45:56.071685 | orchestrator | changed: [testbed-manager] 2026-03-07 00:45:56.071694 | orchestrator | 2026-03-07 00:45:56.071702 | orchestrator | TASK [osism.services.frr : Install frr package] ******************************** 2026-03-07 00:45:56.071710 | orchestrator | Saturday 07 March 2026 00:45:31 +0000 (0:00:01.254) 0:00:01.729 ******** 2026-03-07 00:45:56.071718 | orchestrator | changed: [testbed-manager] 2026-03-07 00:45:56.071726 | orchestrator | 2026-03-07 00:45:56.071734 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] ********************* 2026-03-07 00:45:56.071742 | orchestrator | Saturday 07 March 2026 00:45:42 +0000 (0:00:11.061) 0:00:12.790 ******** 2026-03-07 00:45:56.071750 | orchestrator | ok: [testbed-manager] 2026-03-07 00:45:56.071759 | orchestrator | 2026-03-07 00:45:56.071767 | orchestrator | TASK 
[osism.services.frr : Copy file: /etc/frr/daemons] ************************ 2026-03-07 00:45:56.071775 | orchestrator | Saturday 07 March 2026 00:45:43 +0000 (0:00:01.124) 0:00:13.915 ******** 2026-03-07 00:45:56.071783 | orchestrator | changed: [testbed-manager] 2026-03-07 00:45:56.071809 | orchestrator | 2026-03-07 00:45:56.071822 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ****************************** 2026-03-07 00:45:56.071835 | orchestrator | Saturday 07 March 2026 00:45:44 +0000 (0:00:00.917) 0:00:14.832 ******** 2026-03-07 00:45:56.071848 | orchestrator | ok: [testbed-manager] 2026-03-07 00:45:56.071862 | orchestrator | 2026-03-07 00:45:56.071875 | orchestrator | TASK [osism.services.frr : Write frr_config_template to temporary file] ******** 2026-03-07 00:45:56.071888 | orchestrator | Saturday 07 March 2026 00:45:45 +0000 (0:00:01.238) 0:00:16.070 ******** 2026-03-07 00:45:56.071901 | orchestrator | skipping: [testbed-manager] 2026-03-07 00:45:56.071915 | orchestrator | 2026-03-07 00:45:56.071928 | orchestrator | TASK [osism.services.frr : Render frr.conf from frr_config_template variable] *** 2026-03-07 00:45:56.071940 | orchestrator | Saturday 07 March 2026 00:45:46 +0000 (0:00:00.157) 0:00:16.227 ******** 2026-03-07 00:45:56.071954 | orchestrator | skipping: [testbed-manager] 2026-03-07 00:45:56.071967 | orchestrator | 2026-03-07 00:45:56.071980 | orchestrator | TASK [osism.services.frr : Remove temporary frr_config_template file] ********** 2026-03-07 00:45:56.071993 | orchestrator | Saturday 07 March 2026 00:45:46 +0000 (0:00:00.164) 0:00:16.392 ******** 2026-03-07 00:45:56.072005 | orchestrator | skipping: [testbed-manager] 2026-03-07 00:45:56.072018 | orchestrator | 2026-03-07 00:45:56.072031 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] *** 2026-03-07 00:45:56.072044 | orchestrator | Saturday 07 March 2026 00:45:46 +0000 (0:00:00.170) 0:00:16.563 ******** 2026-03-07 
00:45:56.072057 | orchestrator | skipping: [testbed-manager] 2026-03-07 00:45:56.072069 | orchestrator | 2026-03-07 00:45:56.072083 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] *** 2026-03-07 00:45:56.072097 | orchestrator | Saturday 07 March 2026 00:45:46 +0000 (0:00:00.164) 0:00:16.727 ******** 2026-03-07 00:45:56.072110 | orchestrator | skipping: [testbed-manager] 2026-03-07 00:45:56.072147 | orchestrator | 2026-03-07 00:45:56.072160 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ****** 2026-03-07 00:45:56.072174 | orchestrator | Saturday 07 March 2026 00:45:46 +0000 (0:00:00.180) 0:00:16.907 ******** 2026-03-07 00:45:56.072187 | orchestrator | changed: [testbed-manager] 2026-03-07 00:45:56.072200 | orchestrator | 2026-03-07 00:45:56.072210 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ****************************** 2026-03-07 00:45:56.072217 | orchestrator | Saturday 07 March 2026 00:45:48 +0000 (0:00:01.263) 0:00:18.171 ******** 2026-03-07 00:45:56.072225 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1}) 2026-03-07 00:45:56.072234 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0}) 2026-03-07 00:45:56.072244 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0}) 2026-03-07 00:45:56.072252 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1}) 2026-03-07 00:45:56.072259 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1}) 2026-03-07 00:45:56.072268 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2}) 2026-03-07 00:45:56.072275 | orchestrator | 2026-03-07 00:45:56.072283 | orchestrator | TASK 
[osism.services.frr : Manage frr service] ********************************* 2026-03-07 00:45:56.072291 | orchestrator | Saturday 07 March 2026 00:45:52 +0000 (0:00:04.614) 0:00:22.785 ******** 2026-03-07 00:45:56.072299 | orchestrator | ok: [testbed-manager] 2026-03-07 00:45:56.072307 | orchestrator | 2026-03-07 00:45:56.072315 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] ********************* 2026-03-07 00:45:56.072323 | orchestrator | Saturday 07 March 2026 00:45:54 +0000 (0:00:01.448) 0:00:24.234 ******** 2026-03-07 00:45:56.072330 | orchestrator | changed: [testbed-manager] 2026-03-07 00:45:56.072338 | orchestrator | 2026-03-07 00:45:56.072346 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-07 00:45:56.072365 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-07 00:45:56.072373 | orchestrator | 2026-03-07 00:45:56.072381 | orchestrator | 2026-03-07 00:45:56.072413 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-07 00:45:56.072422 | orchestrator | Saturday 07 March 2026 00:45:55 +0000 (0:00:01.527) 0:00:25.762 ******** 2026-03-07 00:45:56.072430 | orchestrator | =============================================================================== 2026-03-07 00:45:56.072438 | orchestrator | osism.services.frr : Install frr package ------------------------------- 11.06s 2026-03-07 00:45:56.072445 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 4.61s 2026-03-07 00:45:56.072454 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.53s 2026-03-07 00:45:56.072461 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.45s 2026-03-07 00:45:56.072469 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 1.26s 
2026-03-07 00:45:56.072477 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.25s 2026-03-07 00:45:56.072485 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.24s 2026-03-07 00:45:56.072492 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.12s 2026-03-07 00:45:56.072500 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 0.92s 2026-03-07 00:45:56.072508 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.23s 2026-03-07 00:45:56.072516 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 0.18s 2026-03-07 00:45:56.072523 | orchestrator | osism.services.frr : Remove temporary frr_config_template file ---------- 0.17s 2026-03-07 00:45:56.072531 | orchestrator | osism.services.frr : Render frr.conf from frr_config_template variable --- 0.16s 2026-03-07 00:45:56.072539 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.16s 2026-03-07 00:45:56.072547 | orchestrator | osism.services.frr : Write frr_config_template to temporary file -------- 0.16s 2026-03-07 00:45:56.449751 | orchestrator | 2026-03-07 00:45:56.451502 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Sat Mar 7 00:45:56 UTC 2026 2026-03-07 00:45:56.451553 | orchestrator | 2026-03-07 00:45:58.607748 | orchestrator | 2026-03-07 00:45:58 | INFO  | Collection nutshell is prepared for execution 2026-03-07 00:45:58.608821 | orchestrator | 2026-03-07 00:45:58 | INFO  | A [0] - dotfiles 2026-03-07 00:46:08.674585 | orchestrator | 2026-03-07 00:46:08 | INFO  | A [0] - homer 2026-03-07 00:46:08.674706 | orchestrator | 2026-03-07 00:46:08 | INFO  | A [0] - netdata 2026-03-07 00:46:08.674719 | orchestrator | 2026-03-07 00:46:08 | INFO  | A [0] - openstackclient 2026-03-07 00:46:08.674729 | orchestrator | 2026-03-07 00:46:08 
| INFO  | A [0] - phpmyadmin 2026-03-07 00:46:08.674737 | orchestrator | 2026-03-07 00:46:08 | INFO  | A [0] - common 2026-03-07 00:46:08.680676 | orchestrator | 2026-03-07 00:46:08 | INFO  | A [1] -- loadbalancer 2026-03-07 00:46:08.680826 | orchestrator | 2026-03-07 00:46:08 | INFO  | A [2] --- opensearch 2026-03-07 00:46:08.680842 | orchestrator | 2026-03-07 00:46:08 | INFO  | A [2] --- mariadb-ng 2026-03-07 00:46:08.680865 | orchestrator | 2026-03-07 00:46:08 | INFO  | A [3] ---- horizon 2026-03-07 00:46:08.680878 | orchestrator | 2026-03-07 00:46:08 | INFO  | A [3] ---- keystone 2026-03-07 00:46:08.680890 | orchestrator | 2026-03-07 00:46:08 | INFO  | A [4] ----- neutron 2026-03-07 00:46:08.680901 | orchestrator | 2026-03-07 00:46:08 | INFO  | A [5] ------ wait-for-nova 2026-03-07 00:46:08.680913 | orchestrator | 2026-03-07 00:46:08 | INFO  | A [6] ------- octavia 2026-03-07 00:46:08.682683 | orchestrator | 2026-03-07 00:46:08 | INFO  | A [4] ----- barbican 2026-03-07 00:46:08.682939 | orchestrator | 2026-03-07 00:46:08 | INFO  | A [4] ----- designate 2026-03-07 00:46:08.682964 | orchestrator | 2026-03-07 00:46:08 | INFO  | A [4] ----- ironic 2026-03-07 00:46:08.682975 | orchestrator | 2026-03-07 00:46:08 | INFO  | A [4] ----- placement 2026-03-07 00:46:08.682999 | orchestrator | 2026-03-07 00:46:08 | INFO  | A [4] ----- magnum 2026-03-07 00:46:08.683593 | orchestrator | 2026-03-07 00:46:08 | INFO  | A [1] -- openvswitch 2026-03-07 00:46:08.683837 | orchestrator | 2026-03-07 00:46:08 | INFO  | A [2] --- ovn 2026-03-07 00:46:08.684026 | orchestrator | 2026-03-07 00:46:08 | INFO  | A [1] -- memcached 2026-03-07 00:46:08.684052 | orchestrator | 2026-03-07 00:46:08 | INFO  | A [1] -- redis 2026-03-07 00:46:08.684064 | orchestrator | 2026-03-07 00:46:08 | INFO  | A [1] -- rabbitmq-ng 2026-03-07 00:46:08.684742 | orchestrator | 2026-03-07 00:46:08 | INFO  | A [0] - kubernetes 2026-03-07 00:46:08.688368 | orchestrator | 2026-03-07 00:46:08 | INFO  | A [1] -- 
kubeconfig 2026-03-07 00:46:08.688459 | orchestrator | 2026-03-07 00:46:08 | INFO  | A [1] -- copy-kubeconfig 2026-03-07 00:46:08.688480 | orchestrator | 2026-03-07 00:46:08 | INFO  | A [0] - ceph 2026-03-07 00:46:08.692499 | orchestrator | 2026-03-07 00:46:08 | INFO  | A [1] -- ceph-pools 2026-03-07 00:46:08.692712 | orchestrator | 2026-03-07 00:46:08 | INFO  | A [2] --- copy-ceph-keys 2026-03-07 00:46:08.692745 | orchestrator | 2026-03-07 00:46:08 | INFO  | A [3] ---- cephclient 2026-03-07 00:46:08.692765 | orchestrator | 2026-03-07 00:46:08 | INFO  | A [4] ----- ceph-bootstrap-dashboard 2026-03-07 00:46:08.692819 | orchestrator | 2026-03-07 00:46:08 | INFO  | A [4] ----- wait-for-keystone 2026-03-07 00:46:08.692856 | orchestrator | 2026-03-07 00:46:08 | INFO  | A [5] ------ kolla-ceph-rgw 2026-03-07 00:46:08.692877 | orchestrator | 2026-03-07 00:46:08 | INFO  | A [5] ------ glance 2026-03-07 00:46:08.692898 | orchestrator | 2026-03-07 00:46:08 | INFO  | A [5] ------ cinder 2026-03-07 00:46:08.692918 | orchestrator | 2026-03-07 00:46:08 | INFO  | A [5] ------ nova 2026-03-07 00:46:08.693327 | orchestrator | 2026-03-07 00:46:08 | INFO  | A [4] ----- prometheus 2026-03-07 00:46:08.693347 | orchestrator | 2026-03-07 00:46:08 | INFO  | A [5] ------ grafana 2026-03-07 00:46:08.946431 | orchestrator | 2026-03-07 00:46:08 | INFO  | All tasks of the collection nutshell are prepared for execution 2026-03-07 00:46:08.946571 | orchestrator | 2026-03-07 00:46:08 | INFO  | Tasks are running in the background 2026-03-07 00:46:12.620765 | orchestrator | 2026-03-07 00:46:12 | INFO  | No task IDs specified, wait for all currently running tasks 2026-03-07 00:46:14.785998 | orchestrator | 2026-03-07 00:46:14 | INFO  | Task e4f25901-5658-48b6-a6bb-cc18f7d150f3 is in state STARTED 2026-03-07 00:46:14.789720 | orchestrator | 2026-03-07 00:46:14 | INFO  | Task d95e2af2-7e1b-4c29-b379-866f57a21ae8 is in state STARTED 2026-03-07 00:46:14.793465 | orchestrator | 2026-03-07 00:46:14 | INFO 
| Task 7669cfb2-2a7e-4ef2-989a-fd2209d183c5 is in state STARTED
2026-03-07 00:46:14.794128 | orchestrator | 2026-03-07 00:46:14 | INFO  | Task 5b234dec-d220-433a-944e-90111a842742 is in state STARTED
2026-03-07 00:46:14.795602 | orchestrator | 2026-03-07 00:46:14 | INFO  | Task 58f3c3b8-7548-464c-bafb-42fd537ca7de is in state STARTED
2026-03-07 00:46:14.795849 | orchestrator | 2026-03-07 00:46:14 | INFO  | Task 405d0853-c27c-4529-9e1c-80f38f2e0765 is in state STARTED
2026-03-07 00:46:14.800284 | orchestrator | 2026-03-07 00:46:14 | INFO  | Task 30f5c203-3bf2-42b5-a9e0-f9fb23c45f19 is in state STARTED
2026-03-07 00:46:14.800376 | orchestrator | 2026-03-07 00:46:14 | INFO  | Wait 1 second(s) until the next check
2026-03-07 00:46:49.141790 | orchestrator | 2026-03-07 00:46:49 | INFO  | Task e4f25901-5658-48b6-a6bb-cc18f7d150f3 is in state STARTED
2026-03-07 00:46:49.158969 | orchestrator | 2026-03-07 00:46:49 | INFO  | Task d95e2af2-7e1b-4c29-b379-866f57a21ae8 is in state STARTED
2026-03-07 00:46:49.172994 | orchestrator | 2026-03-07 00:46:49 | INFO  | Task bb4ab2a1-8751-4f3b-b1c3-8c9cde82924f is in state STARTED
2026-03-07 00:46:49.190896 | orchestrator | 2026-03-07 00:46:49 | INFO  | Task 7669cfb2-2a7e-4ef2-989a-fd2209d183c5 is in state STARTED
2026-03-07 00:46:49.191005 | orchestrator | 2026-03-07
00:46:49 | INFO  | Task 5b234dec-d220-433a-944e-90111a842742 is in state STARTED
2026-03-07 00:46:49.198597 | orchestrator | 2026-03-07 00:46:49 | INFO  | Task 58f3c3b8-7548-464c-bafb-42fd537ca7de is in state STARTED
2026-03-07 00:46:49.202462 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] *****************************************
2026-03-07 00:46:49.202502 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] ****
2026-03-07 00:46:49.202517 | orchestrator | Saturday 07 March 2026 00:46:27 +0000 (0:00:01.136)       0:00:01.136 ********
2026-03-07 00:46:49.202532 | orchestrator | changed: [testbed-manager]
2026-03-07 00:46:49.202548 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:46:49.202562 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:46:49.202576 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:46:49.202602 | orchestrator | changed: [testbed-node-3]
2026-03-07 00:46:49.202617 | orchestrator | changed: [testbed-node-4]
2026-03-07 00:46:49.202632 | orchestrator | changed: [testbed-node-5]
2026-03-07 00:46:49.202650 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ********
2026-03-07 00:46:49.202659 | orchestrator | Saturday 07 March 2026 00:46:33 +0000 (0:00:02.272)       0:00:06.552 ********
2026-03-07 00:46:49.202668 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2026-03-07 00:46:49.202677 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2026-03-07 00:46:49.202685 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2026-03-07 00:46:49.202693 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2026-03-07 00:46:49.202700 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2026-03-07 00:46:49.202708 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2026-03-07 00:46:49.202716 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2026-03-07 00:46:49.202732 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] ***
2026-03-07 00:46:49.202740 | orchestrator | Saturday 07 March 2026 00:46:35 +0000 (0:00:02.858)       0:00:08.824 ********
2026-03-07 00:46:49.202752 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-07 00:46:35.084373', 'end': '2026-03-07 00:46:35.091719', 'delta': '0:00:00.007346', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-03-07 00:46:49.202887 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] ****
2026-03-07 00:46:49.202895 | orchestrator | Saturday 07 March 2026 00:46:38 +0000 (0:00:02.680)       0:00:11.682 ********
2026-03-07 00:46:49.202903 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2026-03-07 00:46:49.202911 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2026-03-07 00:46:49.202919 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2026-03-07 00:46:49.202927 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2026-03-07 00:46:49.202935 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2026-03-07 00:46:49.202943 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2026-03-07 00:46:49.202951 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2026-03-07 00:46:49.202967 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ******************
2026-03-07 00:46:49.202975 | orchestrator | Saturday 07 March 2026 00:46:41 +0000 (0:00:06.006)       0:00:14.363 ********
2026-03-07 00:46:49.202983 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf)
2026-03-07 00:46:49.202991 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf)
2026-03-07 00:46:49.202999 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf)
2026-03-07 00:46:49.203007 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf)
2026-03-07 00:46:49.203015 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf)
2026-03-07 00:46:49.203022 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf)
2026-03-07 00:46:49.203030 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf)
2026-03-07 00:46:49.203057 | orchestrator | PLAY RECAP *********************************************************************
2026-03-07 00:46:49.203079 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-07 00:46:49.203095 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-07 00:46:49.203113 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-07 00:46:49.203126 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-07 00:46:49.203140 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-07 00:46:49.203153 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-07 00:46:49.203166 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-07 00:46:49.203208 | orchestrator | TASKS RECAP ********************************************************************
2026-03-07 00:46:49.203222 | orchestrator | Saturday 07 March 2026 00:46:47 +0000 (0:00:06.006)       0:00:20.369 ********
2026-03-07 00:46:49.203230 | orchestrator | ===============================================================================
2026-03-07 00:46:49.203238 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 6.01s
2026-03-07 00:46:49.203246 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 5.42s
2026-03-07 00:46:49.203253 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 2.86s
2026-03-07 00:46:49.203261 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 2.68s
2026-03-07 00:46:49.203269 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 2.27s
2026-03-07 00:46:49.203284 | orchestrator | 2026-03-07 00:46:49 | INFO  | Task 405d0853-c27c-4529-9e1c-80f38f2e0765 is in state SUCCESS
2026-03-07 00:46:49.206355 | orchestrator | 2026-03-07 00:46:49 | INFO  | Task 30f5c203-3bf2-42b5-a9e0-f9fb23c45f19 is in state STARTED
2026-03-07 00:46:49.206514 | orchestrator | 2026-03-07 00:46:49 | INFO  | Wait 1 second(s) until the next check
2026-03-07 00:46:55.915222 | orchestrator | 2026-03-07 00:46:55 | INFO  | Task e4f25901-5658-48b6-a6bb-cc18f7d150f3 is in state STARTED
2026-03-07 00:46:55.916675 | orchestrator | 2026-03-07 00:46:55 | INFO  | Task d95e2af2-7e1b-4c29-b379-866f57a21ae8 is in state STARTED
2026-03-07 00:46:55.920038 | orchestrator | 2026-03-07 00:46:55 | INFO  | Task bb4ab2a1-8751-4f3b-b1c3-8c9cde82924f is in state STARTED
2026-03-07 00:46:55.922696 | orchestrator | 2026-03-07 00:46:55 | INFO  | Task 7669cfb2-2a7e-4ef2-989a-fd2209d183c5 is in state
STARTED
2026-03-07 00:46:55.924661 | orchestrator | 2026-03-07 00:46:55 | INFO  | Task 5b234dec-d220-433a-944e-90111a842742 is in state STARTED
2026-03-07 00:46:55.925973 | orchestrator | 2026-03-07 00:46:55 | INFO  | Task 58f3c3b8-7548-464c-bafb-42fd537ca7de is in state STARTED
2026-03-07 00:46:55.930581 | orchestrator | 2026-03-07 00:46:55 | INFO  | Task 30f5c203-3bf2-42b5-a9e0-f9fb23c45f19 is in state STARTED
2026-03-07 00:46:55.930687 | orchestrator | 2026-03-07 00:46:55 | INFO  | Wait 1 second(s) until the next check
2026-03-07 00:47:18.768083 | orchestrator | 2026-03-07 00:47:18 | INFO  | Task e4f25901-5658-48b6-a6bb-cc18f7d150f3 is in state STARTED
2026-03-07 00:47:18.768174 | orchestrator | 2026-03-07 00:47:18 | INFO  | Task d95e2af2-7e1b-4c29-b379-866f57a21ae8 is in state STARTED
2026-03-07 00:47:18.768183 | orchestrator | 2026-03-07 00:47:18 | INFO  | Task bb4ab2a1-8751-4f3b-b1c3-8c9cde82924f is in state STARTED
2026-03-07 00:47:18.768190 | orchestrator | 2026-03-07 00:47:18 | INFO  | Task 7669cfb2-2a7e-4ef2-989a-fd2209d183c5 is in state STARTED
2026-03-07 00:47:18.768213 | orchestrator | 2026-03-07 00:47:18 | INFO  | Task 5b234dec-d220-433a-944e-90111a842742 is in state STARTED
2026-03-07 00:47:18.768219 | orchestrator | 2026-03-07 00:47:18 | INFO  | Task 58f3c3b8-7548-464c-bafb-42fd537ca7de is in state STARTED
2026-03-07 00:47:18.768225 | orchestrator | 2026-03-07 00:47:18 | INFO  | Task 30f5c203-3bf2-42b5-a9e0-f9fb23c45f19 is in state SUCCESS
2026-03-07 00:47:18.768232 | orchestrator | 2026-03-07 00:47:18 | INFO  | Wait 1 second(s) until the next check
2026-03-07 00:47:31.111134 | orchestrator | 2026-03-07 00:47:31 | INFO  | Task e4f25901-5658-48b6-a6bb-cc18f7d150f3 is in state STARTED
2026-03-07 00:47:31.117243 | orchestrator | 2026-03-07 00:47:31 | INFO  | Task d95e2af2-7e1b-4c29-b379-866f57a21ae8 is in state STARTED
2026-03-07 00:47:31.117346 | orchestrator | 2026-03-07 00:47:31 | INFO  | Task bb4ab2a1-8751-4f3b-b1c3-8c9cde82924f is in state STARTED
2026-03-07 00:47:31.117360 | orchestrator | 2026-03-07 00:47:31 | INFO  | Task 7669cfb2-2a7e-4ef2-989a-fd2209d183c5 is in state SUCCESS
2026-03-07 00:47:31.121489 | orchestrator | 2026-03-07 00:47:31 | INFO  | Task 5b234dec-d220-433a-944e-90111a842742 is in state STARTED
2026-03-07 00:47:31.121566 | orchestrator | 2026-03-07 00:47:31 | INFO  | Task 58f3c3b8-7548-464c-bafb-42fd537ca7de is in state STARTED
2026-03-07 00:47:31.121577 | orchestrator | 2026-03-07 00:47:31 | INFO  | Wait 1 second(s) until the next check
2026-03-07 00:47:40.316550 | orchestrator | 2026-03-07 00:47:40 | INFO  | Task e4f25901-5658-48b6-a6bb-cc18f7d150f3 is in state STARTED
2026-03-07 00:47:40.316871 | orchestrator | 2026-03-07 00:47:40 | INFO  | Task d95e2af2-7e1b-4c29-b379-866f57a21ae8 is in state STARTED
2026-03-07 00:47:40.321085 | orchestrator | 2026-03-07 00:47:40 | INFO  | Task bb4ab2a1-8751-4f3b-b1c3-8c9cde82924f is in state STARTED
2026-03-07 00:47:40.322001 | orchestrator | 2026-03-07 00:47:40 | INFO  | Task 5b234dec-d220-433a-944e-90111a842742 is in state STARTED
2026-03-07 00:47:40.323503 | orchestrator | 2026-03-07 00:47:40 | INFO  | Task 58f3c3b8-7548-464c-bafb-42fd537ca7de is in state STARTED
2026-03-07 00:47:40.323530 | orchestrator | 2026-03-07 00:47:40 | INFO  | Wait 1 second(s) until the next check
2026-03-07 00:47:43.385988 | orchestrator | 2026-03-07 00:47:43 | INFO  | Task e4f25901-5658-48b6-a6bb-cc18f7d150f3 is in state STARTED
2026-03-07 00:47:43.388130 | orchestrator | 2026-03-07 00:47:43 | INFO  | Task d95e2af2-7e1b-4c29-b379-866f57a21ae8 is in state STARTED
2026-03-07 00:47:43.388206 | orchestrator | 2026-03-07 00:47:43 | INFO  | Task bb4ab2a1-8751-4f3b-b1c3-8c9cde82924f is in state STARTED
2026-03-07 00:47:43.388216 | orchestrator | 2026-03-07 00:47:43 | INFO  | Task 5b234dec-d220-433a-944e-90111a842742 is in state STARTED
2026-03-07 00:47:43.388224 | orchestrator | 2026-03-07 00:47:43 | INFO  | Task 58f3c3b8-7548-464c-bafb-42fd537ca7de is in state STARTED
2026-03-07 00:47:43.388233 | orchestrator | 2026-03-07 00:47:43 | INFO  | Wait 1 second(s) until the next check
2026-03-07 00:47:46.452024 | orchestrator | 2026-03-07 00:47:46 | INFO  | Task e4f25901-5658-48b6-a6bb-cc18f7d150f3 is in state STARTED
2026-03-07 00:47:46.471063 | orchestrator | 2026-03-07 00:47:46 | INFO  | Task d95e2af2-7e1b-4c29-b379-866f57a21ae8 is in state STARTED
2026-03-07 00:47:46.471185 | orchestrator | 2026-03-07 00:47:46 | INFO  | Task bb4ab2a1-8751-4f3b-b1c3-8c9cde82924f is in state STARTED
2026-03-07 00:47:46.471210 | orchestrator | 2026-03-07 00:47:46 | INFO  | Task 5b234dec-d220-433a-944e-90111a842742 is in state STARTED
2026-03-07 00:47:46.471230 | orchestrator | 2026-03-07 00:47:46 | INFO  | Task 58f3c3b8-7548-464c-bafb-42fd537ca7de is in state STARTED
2026-03-07 00:47:46.471249 | orchestrator | 2026-03-07 00:47:46 | INFO  | Wait 1 second(s) until the next check
2026-03-07 00:47:49.534757 | orchestrator | 2026-03-07 00:47:49 | INFO  | Task e4f25901-5658-48b6-a6bb-cc18f7d150f3 is in state STARTED
2026-03-07 00:47:49.538958 | orchestrator | 2026-03-07 00:47:49 | INFO  | Task d95e2af2-7e1b-4c29-b379-866f57a21ae8 is in state STARTED
2026-03-07 00:47:49.539558 | orchestrator | 2026-03-07 00:47:49 | INFO  | Task bb4ab2a1-8751-4f3b-b1c3-8c9cde82924f is in state STARTED
2026-03-07 00:47:49.540832 | orchestrator | 2026-03-07 00:47:49 | INFO  | Task 5b234dec-d220-433a-944e-90111a842742 is in state STARTED
2026-03-07 00:47:49.541824 | orchestrator | 2026-03-07 00:47:49 | INFO  | Task 58f3c3b8-7548-464c-bafb-42fd537ca7de is in state STARTED
2026-03-07 00:47:49.541857 | orchestrator | 2026-03-07 00:47:49 | INFO  | Wait 1 second(s) until the next check
2026-03-07 00:47:52.593427 | orchestrator | 2026-03-07 00:47:52 | INFO  | Task e4f25901-5658-48b6-a6bb-cc18f7d150f3 is in state STARTED
2026-03-07 00:47:52.594080 | orchestrator | 2026-03-07 00:47:52 | INFO  | Task d95e2af2-7e1b-4c29-b379-866f57a21ae8 is in state STARTED
2026-03-07 00:47:52.610143 | orchestrator | 2026-03-07 00:47:52 | INFO  | Task bb4ab2a1-8751-4f3b-b1c3-8c9cde82924f is in state STARTED
2026-03-07 00:47:52.614427 | orchestrator | 2026-03-07 00:47:52 | INFO  | Task 5b234dec-d220-433a-944e-90111a842742 is in state STARTED
2026-03-07 00:47:52.617944 | orchestrator | 2026-03-07 00:47:52 | INFO  | Task 58f3c3b8-7548-464c-bafb-42fd537ca7de is in state STARTED
2026-03-07 00:47:52.619188 | orchestrator | 2026-03-07 00:47:52 | INFO  | Wait 1 second(s) until the next check
2026-03-07 00:47:55.682479 | orchestrator | 2026-03-07 00:47:55 | INFO  | Task e4f25901-5658-48b6-a6bb-cc18f7d150f3 is in state STARTED
2026-03-07 00:47:55.684530 | orchestrator | 2026-03-07 00:47:55 | INFO  | Task d95e2af2-7e1b-4c29-b379-866f57a21ae8 is in state STARTED
2026-03-07 00:47:55.690074 | orchestrator | 2026-03-07 00:47:55 | INFO  | Task bb4ab2a1-8751-4f3b-b1c3-8c9cde82924f is in state STARTED
2026-03-07 00:47:55.692266 | orchestrator | 2026-03-07 00:47:55 | INFO  | Task 5b234dec-d220-433a-944e-90111a842742 is in state STARTED
2026-03-07 00:47:55.695409 | orchestrator | 2026-03-07 00:47:55 | INFO  | Task 58f3c3b8-7548-464c-bafb-42fd537ca7de is in state STARTED
2026-03-07 00:47:55.696166 | orchestrator | 2026-03-07 00:47:55 | INFO  | Wait 1 second(s) until the next check
2026-03-07 00:47:58.740668 | orchestrator | 2026-03-07 00:47:58 | INFO  | Task e4f25901-5658-48b6-a6bb-cc18f7d150f3 is in state STARTED
2026-03-07 00:47:58.741615 | orchestrator | 2026-03-07 00:47:58 | INFO  | Task d95e2af2-7e1b-4c29-b379-866f57a21ae8 is in state STARTED
2026-03-07 00:47:58.743025 | orchestrator | 2026-03-07 00:47:58 | INFO  | Task bb4ab2a1-8751-4f3b-b1c3-8c9cde82924f is in state STARTED
2026-03-07 00:47:58.744412 | orchestrator | 2026-03-07 00:47:58 | INFO  | Task 5b234dec-d220-433a-944e-90111a842742 is in state STARTED
2026-03-07 00:47:58.748142 | orchestrator | 2026-03-07 00:47:58 | INFO  | Task 58f3c3b8-7548-464c-bafb-42fd537ca7de is in state STARTED
2026-03-07 00:47:58.748283 | orchestrator | 2026-03-07 00:47:58 | INFO  | Wait 1 second(s) until the next check
2026-03-07 00:48:01.832808 | orchestrator | 2026-03-07 00:48:01 | INFO  | Task e4f25901-5658-48b6-a6bb-cc18f7d150f3 is in state STARTED
2026-03-07 00:48:01.844264 | orchestrator | 2026-03-07 00:48:01 | INFO  | Task d95e2af2-7e1b-4c29-b379-866f57a21ae8 is in state STARTED
2026-03-07 00:48:01.848475 | orchestrator | 2026-03-07 00:48:01 | INFO  | Task bb4ab2a1-8751-4f3b-b1c3-8c9cde82924f is in state STARTED
2026-03-07 00:48:01.852514 | orchestrator | 2026-03-07 00:48:01 | INFO  | Task 5b234dec-d220-433a-944e-90111a842742 is in state STARTED
2026-03-07 00:48:01.855090 | orchestrator | 2026-03-07 00:48:01 | INFO  | Task 58f3c3b8-7548-464c-bafb-42fd537ca7de is in state STARTED
2026-03-07 00:48:01.857085 | orchestrator | 2026-03-07 00:48:01 | INFO  | Wait 1 second(s) until the next check
2026-03-07 00:48:04.960702 | orchestrator | 2026-03-07 00:48:04 | INFO  | Task e4f25901-5658-48b6-a6bb-cc18f7d150f3 is in state STARTED
2026-03-07 00:48:04.980605 | orchestrator | 2026-03-07 00:48:04 | INFO  | Task d95e2af2-7e1b-4c29-b379-866f57a21ae8 is in state STARTED
2026-03-07 00:48:05.000283 | orchestrator | 2026-03-07 00:48:04 | INFO  | Task bb4ab2a1-8751-4f3b-b1c3-8c9cde82924f is in state STARTED
2026-03-07 00:48:05.000360 | orchestrator | 2026-03-07 00:48:04 | INFO  | Task 5b234dec-d220-433a-944e-90111a842742 is in state STARTED
2026-03-07 00:48:05.025652 | orchestrator | 2026-03-07 00:48:05 | INFO  | Task 58f3c3b8-7548-464c-bafb-42fd537ca7de is in state STARTED
2026-03-07 00:48:05.025817 | orchestrator | 2026-03-07 00:48:05 | INFO  | Wait 1 second(s) until the next check
2026-03-07 00:48:08.075147 | orchestrator | 2026-03-07 00:48:08 | INFO  | Task e4f25901-5658-48b6-a6bb-cc18f7d150f3 is in state STARTED
2026-03-07 00:48:08.076994 | orchestrator | 2026-03-07 00:48:08 | INFO  | Task d95e2af2-7e1b-4c29-b379-866f57a21ae8 is in state STARTED
2026-03-07 00:48:08.077977 | orchestrator | 2026-03-07 00:48:08 | INFO  | Task bb4ab2a1-8751-4f3b-b1c3-8c9cde82924f is in state STARTED
2026-03-07 00:48:08.079026 | orchestrator | 2026-03-07 00:48:08 | INFO  | Task 5b234dec-d220-433a-944e-90111a842742 is in state STARTED
2026-03-07 00:48:08.079975 | orchestrator | 2026-03-07 00:48:08 | INFO  | Task 58f3c3b8-7548-464c-bafb-42fd537ca7de is in state STARTED
2026-03-07 00:48:08.080019 | orchestrator | 2026-03-07 00:48:08 | INFO  | Wait 1 second(s) until the next check
2026-03-07 00:48:11.128598 | orchestrator | 2026-03-07 00:48:11 | INFO  | Task e4f25901-5658-48b6-a6bb-cc18f7d150f3 is in state STARTED
2026-03-07 00:48:11.129533 | orchestrator | 2026-03-07 00:48:11 | INFO  | Task d95e2af2-7e1b-4c29-b379-866f57a21ae8 is in state STARTED
2026-03-07 00:48:11.133445 | orchestrator | 
2026-03-07 00:48:11.133511 | orchestrator | 
2026-03-07 00:48:11.133530 | orchestrator | PLAY [Apply role homer] ********************************************************
2026-03-07 00:48:11.133536 | orchestrator | 
2026-03-07 00:48:11.133540 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] ***
2026-03-07 00:48:11.133545 | orchestrator | Saturday 07 March 2026 00:46:28 +0000 (0:00:00.687) 0:00:00.687 ********
2026-03-07 00:48:11.133549 | orchestrator | ok: [testbed-manager] => {
2026-03-07 00:48:11.133555 | orchestrator |     "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter."
2026-03-07 00:48:11.133560 | orchestrator | }
2026-03-07 00:48:11.133565 | orchestrator | 
2026-03-07 00:48:11.133569 | orchestrator | TASK [osism.services.homer : Create traefik external network] ******************
2026-03-07 00:48:11.133573 | orchestrator | Saturday 07 March 2026 00:46:29 +0000 (0:00:00.703) 0:00:01.390 ********
2026-03-07 00:48:11.133576 | orchestrator | ok: [testbed-manager]
2026-03-07 00:48:11.133581 | orchestrator | 
2026-03-07 00:48:11.133585 | orchestrator | TASK [osism.services.homer : Create required directories] **********************
2026-03-07 00:48:11.133589 | orchestrator | Saturday 07 March 2026 00:46:31 +0000 (0:00:01.793) 0:00:03.183 ********
2026-03-07 00:48:11.133593 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration)
2026-03-07 00:48:11.133597 | orchestrator | ok: [testbed-manager] => (item=/opt/homer)
2026-03-07 00:48:11.133601 | orchestrator | 
2026-03-07 00:48:11.133605 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] ***************
2026-03-07 00:48:11.133609 | orchestrator | Saturday 07 March 2026 00:46:34 +0000 (0:00:03.167) 0:00:06.351 ********
2026-03-07 00:48:11.133613 | orchestrator | changed: [testbed-manager]
2026-03-07 00:48:11.133616 | orchestrator | 
2026-03-07 00:48:11.133620 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] *********************
2026-03-07 00:48:11.133624 | orchestrator | Saturday 07 March 2026 00:46:39 +0000 (0:00:04.989) 0:00:11.341 ********
2026-03-07 00:48:11.133628 | orchestrator | changed: [testbed-manager]
2026-03-07 00:48:11.133631 | orchestrator | 
2026-03-07 00:48:11.133635 | orchestrator | TASK [osism.services.homer : Manage homer service] *****************************
2026-03-07 00:48:11.133639 | orchestrator | Saturday 07 March 2026 00:46:42 +0000 (0:00:02.663) 0:00:14.005 ********
2026-03-07 00:48:11.133643 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left).
2026-03-07 00:48:11.133646 | orchestrator | ok: [testbed-manager]
2026-03-07 00:48:11.133650 | orchestrator | 
2026-03-07 00:48:11.133654 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] *****************
2026-03-07 00:48:11.133658 | orchestrator | Saturday 07 March 2026 00:47:09 +0000 (0:00:27.674) 0:00:41.680 ********
2026-03-07 00:48:11.133661 | orchestrator | changed: [testbed-manager]
2026-03-07 00:48:11.133682 | orchestrator | 
2026-03-07 00:48:11.133686 | orchestrator | PLAY RECAP *********************************************************************
2026-03-07 00:48:11.133690 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-07 00:48:11.133695 | orchestrator | 
2026-03-07 00:48:11.133699 | orchestrator | 
2026-03-07 00:48:11.133702 | orchestrator | TASKS RECAP ********************************************************************
2026-03-07 00:48:11.133706 | orchestrator | Saturday 07 March 2026 00:47:15 +0000 (0:00:05.586) 0:00:47.266 ********
2026-03-07 00:48:11.133710 | orchestrator | ===============================================================================
2026-03-07 00:48:11.133714 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 27.67s
2026-03-07 00:48:11.133717 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 5.59s
2026-03-07 00:48:11.133721 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 4.99s
2026-03-07 00:48:11.133725 | orchestrator | osism.services.homer : Create required directories ---------------------- 3.17s
2026-03-07 00:48:11.133728 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 2.67s
2026-03-07 00:48:11.133732 | orchestrator | osism.services.homer : Create traefik external network ------------------ 1.79s
2026-03-07 00:48:11.133736 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.70s
2026-03-07 00:48:11.133739 | orchestrator | 
2026-03-07 00:48:11.133743 | orchestrator | 
2026-03-07 00:48:11.133783 | orchestrator | PLAY [Apply role openstackclient] **********************************************
2026-03-07 00:48:11.133787 | orchestrator | 
2026-03-07 00:48:11.133791 | orchestrator | TASK [osism.services.openstackclient : Include tasks] **************************
2026-03-07 00:48:11.133795 | orchestrator | Saturday 07 March 2026 00:46:31 +0000 (0:00:01.091) 0:00:01.091 ********
2026-03-07 00:48:11.133799 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager
2026-03-07 00:48:11.133804 | orchestrator | 
2026-03-07 00:48:11.133808 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************
2026-03-07 00:48:11.133811 | orchestrator | Saturday 07 March 2026 00:46:31 +0000 (0:00:00.257) 0:00:01.349 ********
2026-03-07 00:48:11.133815 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack)
2026-03-07 00:48:11.133819 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data)
2026-03-07 00:48:11.133823 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient)
2026-03-07 00:48:11.133827 | orchestrator | 
2026-03-07 00:48:11.133830 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
2026-03-07 00:48:11.133834 | orchestrator | Saturday 07 March 2026 00:46:34 +0000 (0:00:02.726) 0:00:04.075 ********
2026-03-07 00:48:11.133838 | orchestrator | changed: [testbed-manager]
2026-03-07 00:48:11.133842 | orchestrator | 
2026-03-07 00:48:11.133845 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] *********
2026-03-07 00:48:11.133849 | orchestrator | Saturday 07 March 2026 00:46:38 +0000 (0:00:03.723) 0:00:07.799 ********
2026-03-07 00:48:11.133867 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left).
2026-03-07 00:48:11.133871 | orchestrator | ok: [testbed-manager]
2026-03-07 00:48:11.133875 | orchestrator | 
2026-03-07 00:48:11.133879 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] **********
2026-03-07 00:48:11.133882 | orchestrator | Saturday 07 March 2026 00:47:16 +0000 (0:00:38.023) 0:00:45.822 ********
2026-03-07 00:48:11.133886 | orchestrator | changed: [testbed-manager]
2026-03-07 00:48:11.133890 | orchestrator | 
2026-03-07 00:48:11.133894 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] **********
2026-03-07 00:48:11.133897 | orchestrator | Saturday 07 March 2026 00:47:19 +0000 (0:00:03.193) 0:00:49.015 ********
2026-03-07 00:48:11.133901 | orchestrator | ok: [testbed-manager]
2026-03-07 00:48:11.133905 | orchestrator | 
2026-03-07 00:48:11.133909 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] ***
2026-03-07 00:48:11.133916 | orchestrator | Saturday 07 March 2026 00:47:22 +0000 (0:00:02.769) 0:00:51.784 ********
2026-03-07 00:48:11.133920 | orchestrator | changed: [testbed-manager]
2026-03-07 00:48:11.133924 | orchestrator | 
2026-03-07 00:48:11.133928 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] ***
2026-03-07 00:48:11.133931 | orchestrator | Saturday 07 March 2026 00:47:25 +0000 (0:00:03.500) 0:00:55.285 ********
2026-03-07 00:48:11.133935 | orchestrator | changed: [testbed-manager]
2026-03-07 00:48:11.133939 | orchestrator | 
2026-03-07 00:48:11.133943 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] ***
2026-03-07 00:48:11.133947 | orchestrator | Saturday 07 March 2026 00:47:28 +0000 (0:00:02.721) 0:00:58.007 ********
2026-03-07 00:48:11.133950 | orchestrator | changed: [testbed-manager]
2026-03-07 00:48:11.133954 | orchestrator | 
2026-03-07 00:48:11.133958 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
2026-03-07 00:48:11.133962 | orchestrator | Saturday 07 March 2026 00:47:29 +0000 (0:00:00.964) 0:00:58.971 ********
2026-03-07 00:48:11.133965 | orchestrator | ok: [testbed-manager]
2026-03-07 00:48:11.133969 | orchestrator | 
2026-03-07 00:48:11.133973 | orchestrator | PLAY RECAP *********************************************************************
2026-03-07 00:48:11.133977 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-07 00:48:11.133980 | orchestrator | 
2026-03-07 00:48:11.133984 | orchestrator | 
2026-03-07 00:48:11.133988 | orchestrator | TASKS RECAP ********************************************************************
2026-03-07 00:48:11.133992 | orchestrator | Saturday 07 March 2026 00:47:30 +0000 (0:00:00.879) 0:00:59.850 ********
2026-03-07 00:48:11.133995 | orchestrator | ===============================================================================
2026-03-07 00:48:11.133999 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 38.02s
2026-03-07 00:48:11.134003 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 3.72s
2026-03-07 00:48:11.134007 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 3.50s
2026-03-07 00:48:11.134011 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 3.19s
2026-03-07 00:48:11.134053 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 2.77s
2026-03-07 00:48:11.134061 | orchestrator | osism.services.openstackclient : Create required directories ------------ 2.73s
2026-03-07 00:48:11.134068 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 2.72s
2026-03-07 00:48:11.134075 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.96s
2026-03-07 00:48:11.134082 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.88s
2026-03-07 00:48:11.134089 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.26s
2026-03-07 00:48:11.134096 | orchestrator | 
2026-03-07 00:48:11.134103 | orchestrator | 
2026-03-07 00:48:11.134108 | orchestrator | PLAY [Apply role phpmyadmin] ***************************************************
2026-03-07 00:48:11.134112 | orchestrator | 
2026-03-07 00:48:11.134117 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] *************
2026-03-07 00:48:11.134121 | orchestrator | Saturday 07 March 2026 00:46:54 +0000 (0:00:00.373) 0:00:00.373 ********
2026-03-07 00:48:11.134125 | orchestrator | ok: [testbed-manager]
2026-03-07 00:48:11.134130 | orchestrator | 
2026-03-07 00:48:11.134134 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] *****************
2026-03-07 00:48:11.134138 | orchestrator | Saturday 07 March 2026 00:46:56 +0000 (0:00:01.396) 0:00:01.770 ********
2026-03-07 00:48:11.134143 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin)
2026-03-07 00:48:11.134147 | orchestrator | 
2026-03-07 00:48:11.134151 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] ****************
2026-03-07 00:48:11.134157 | orchestrator | Saturday 07 March 2026 00:46:56 +0000 (0:00:00.675) 0:00:02.446 ********
2026-03-07 00:48:11.134169 | orchestrator | changed: [testbed-manager]
2026-03-07 00:48:11.134179 | orchestrator | 
2026-03-07 00:48:11.134186 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] *******************
2026-03-07 00:48:11.134192 | orchestrator | Saturday 07 March 2026 00:46:58 +0000 (0:00:01.823) 0:00:04.269 ********
2026-03-07 00:48:11.134198 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left).
2026-03-07 00:48:11.134204 | orchestrator | ok: [testbed-manager]
2026-03-07 00:48:11.134210 | orchestrator | 
2026-03-07 00:48:11.134217 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] *******
2026-03-07 00:48:11.134224 | orchestrator | Saturday 07 March 2026 00:48:02 +0000 (0:01:03.876) 0:01:08.145 ********
2026-03-07 00:48:11.134231 | orchestrator | changed: [testbed-manager]
2026-03-07 00:48:11.134236 | orchestrator | 
2026-03-07 00:48:11.134240 | orchestrator | PLAY RECAP *********************************************************************
2026-03-07 00:48:11.134245 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-07 00:48:11.134249 | orchestrator | 
2026-03-07 00:48:11.134254 | orchestrator | 
2026-03-07 00:48:11.134258 | orchestrator | TASKS RECAP ********************************************************************
2026-03-07 00:48:11.134267 | orchestrator | Saturday 07 March 2026 00:48:08 +0000 (0:00:06.386) 0:01:14.532 ********
2026-03-07 00:48:11.134272 | orchestrator | ===============================================================================
2026-03-07 00:48:11.134277 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 63.88s
2026-03-07 00:48:11.134282 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 6.39s
2026-03-07 00:48:11.134286 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.82s
2026-03-07 00:48:11.134290 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 1.40s
2026-03-07 00:48:11.134295 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.68s
2026-03-07 00:48:11.134300 | orchestrator | 2026-03-07 00:48:11 | INFO  | Task bb4ab2a1-8751-4f3b-b1c3-8c9cde82924f is in state SUCCESS
2026-03-07 00:48:11.134305 | orchestrator | 2026-03-07 00:48:11 | INFO  | Task 5b234dec-d220-433a-944e-90111a842742 is in state STARTED
2026-03-07 00:48:11.135719 | orchestrator | 2026-03-07 00:48:11 | INFO  | Task 58f3c3b8-7548-464c-bafb-42fd537ca7de is in state STARTED
2026-03-07 00:48:11.137145 | orchestrator | 2026-03-07 00:48:11 | INFO  | Wait 1 second(s) until the next check
2026-03-07 00:48:14.178813 | orchestrator | 2026-03-07 00:48:14 | INFO  | Task e4f25901-5658-48b6-a6bb-cc18f7d150f3 is in state STARTED
2026-03-07 00:48:14.180754 | orchestrator | 2026-03-07 00:48:14 | INFO  | Task d95e2af2-7e1b-4c29-b379-866f57a21ae8 is in state STARTED
2026-03-07 00:48:14.182633 | orchestrator | 2026-03-07 00:48:14 | INFO  | Task 5b234dec-d220-433a-944e-90111a842742 is in state STARTED
2026-03-07 00:48:14.183930 | orchestrator | 2026-03-07 00:48:14 | INFO  | Task 58f3c3b8-7548-464c-bafb-42fd537ca7de is in state STARTED
2026-03-07 00:48:14.184026 | orchestrator | 2026-03-07 00:48:14 | INFO  | Wait 1 second(s) until the next check
2026-03-07 00:48:17.229342 | orchestrator | 2026-03-07 00:48:17 | INFO  | Task e4f25901-5658-48b6-a6bb-cc18f7d150f3 is in state STARTED
2026-03-07 00:48:17.233884 | orchestrator | 2026-03-07 00:48:17 | INFO  | Task d95e2af2-7e1b-4c29-b379-866f57a21ae8 is in state STARTED
2026-03-07 00:48:17.237804 | orchestrator | 2026-03-07 00:48:17 | INFO  | Task 5b234dec-d220-433a-944e-90111a842742 is in state STARTED
2026-03-07 00:48:17.240588 | orchestrator | 2026-03-07 00:48:17 | INFO  | Task 58f3c3b8-7548-464c-bafb-42fd537ca7de is in state STARTED
2026-03-07 00:48:17.240618 | orchestrator | 2026-03-07 00:48:17 | INFO  | Wait 1 second(s) until the next check
2026-03-07 00:48:20.296069 | orchestrator | 2026-03-07 00:48:20 | INFO  | Task e4f25901-5658-48b6-a6bb-cc18f7d150f3 is in state STARTED
2026-03-07 00:48:20.297805 | orchestrator | 2026-03-07 00:48:20 | INFO  | Task d95e2af2-7e1b-4c29-b379-866f57a21ae8 is in state STARTED
2026-03-07 00:48:20.302254 | orchestrator | 2026-03-07 00:48:20 | INFO  | Task 5b234dec-d220-433a-944e-90111a842742 is in state STARTED
2026-03-07 00:48:20.306135 | orchestrator | 2026-03-07 00:48:20 | INFO  | Task 58f3c3b8-7548-464c-bafb-42fd537ca7de is in state STARTED
2026-03-07 00:48:20.308005 | orchestrator | 2026-03-07 00:48:20 | INFO  | Wait 1 second(s) until the next check
2026-03-07 00:48:23.386445 | orchestrator | 2026-03-07 00:48:23 | INFO  | Task e4f25901-5658-48b6-a6bb-cc18f7d150f3 is in state STARTED
2026-03-07 00:48:23.390924 | orchestrator | 2026-03-07 00:48:23 | INFO  | Task d95e2af2-7e1b-4c29-b379-866f57a21ae8 is in state STARTED
2026-03-07 00:48:23.394482 | orchestrator | 2026-03-07 00:48:23 | INFO  | Task 5b234dec-d220-433a-944e-90111a842742 is in state STARTED
2026-03-07 00:48:23.395027 | orchestrator | 2026-03-07 00:48:23 | INFO  | Task 58f3c3b8-7548-464c-bafb-42fd537ca7de is in state STARTED
2026-03-07 00:48:23.395103 | orchestrator | 2026-03-07 00:48:23 | INFO  | Wait 1 second(s) until the next check
2026-03-07 00:48:26.456313 | orchestrator | 2026-03-07 00:48:26 | INFO  | Task e4f25901-5658-48b6-a6bb-cc18f7d150f3 is in state STARTED
2026-03-07 00:48:26.460724 | orchestrator | 
2026-03-07 00:48:26.460878 | orchestrator | 
2026-03-07 00:48:26.460895 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-07 00:48:26.460909 | orchestrator | 
2026-03-07 00:48:26.460920 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-07 00:48:26.460932 | orchestrator | Saturday 07 March 2026 00:46:27 +0000 (0:00:01.226) 0:00:01.226 ********
2026-03-07 00:48:26.460944 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True)
2026-03-07 00:48:26.460955 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True)
2026-03-07 00:48:26.460965 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True)
2026-03-07 00:48:26.460976 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True)
2026-03-07 00:48:26.460987 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True)
2026-03-07 00:48:26.460999 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True)
2026-03-07 00:48:26.461026 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True)
2026-03-07 00:48:26.461038 | orchestrator | 
2026-03-07 00:48:26.461049 | orchestrator | PLAY [Apply role netdata] ******************************************************
2026-03-07 00:48:26.461060 | orchestrator | 
2026-03-07 00:48:26.461071 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] ****
2026-03-07 00:48:26.461082 | orchestrator | Saturday 07 March 2026 00:46:30 +0000 (0:00:03.375) 0:00:04.601 ********
2026-03-07 00:48:26.461108 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-07 00:48:26.461121 | orchestrator | 
2026-03-07 00:48:26.461132 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] ***
2026-03-07 00:48:26.461143 | orchestrator | Saturday 07 March 2026 00:46:32 +0000 (0:00:01.584) 0:00:06.186 ********
2026-03-07 00:48:26.461154 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:48:26.461166 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:48:26.461177 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:48:26.461188 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:48:26.461251 | orchestrator | ok: [testbed-manager]
2026-03-07 00:48:26.461265 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:48:26.461278 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:48:26.461290 | orchestrator | 
2026-03-07 00:48:26.461303 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************
2026-03-07 00:48:26.461368 | orchestrator | Saturday 07 March 2026 00:46:35 +0000 (0:00:03.009) 0:00:09.195 ********
2026-03-07 00:48:26.461382 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:48:26.461395 | orchestrator | ok: [testbed-manager]
2026-03-07 00:48:26.461407 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:48:26.461419 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:48:26.461431 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:48:26.461444 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:48:26.461456 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:48:26.461469 | orchestrator | 
2026-03-07 00:48:26.461481 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] *************************
2026-03-07 00:48:26.461494 | orchestrator | Saturday 07 March 2026 00:46:39 +0000 (0:00:04.329) 0:00:13.525 ********
2026-03-07 00:48:26.461507 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:48:26.461520 | orchestrator | changed: [testbed-node-3]
2026-03-07 00:48:26.461539 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:48:26.461559 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:48:26.461580 | orchestrator | changed: [testbed-manager]
2026-03-07 00:48:26.461599 | orchestrator | changed: [testbed-node-4]
2026-03-07 00:48:26.461617 | orchestrator | changed: [testbed-node-5]
2026-03-07 00:48:26.461635 | orchestrator | 
2026-03-07 00:48:26.461654 | orchestrator | TASK [osism.services.netdata : Add repository] *********************************
2026-03-07 00:48:26.461671 | orchestrator | Saturday 07 March 2026 00:46:43 +0000 (0:00:03.323) 0:00:16.849 ********
2026-03-07 00:48:26.461689 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:48:26.461706 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:48:26.461723 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:48:26.461742 | orchestrator | changed: [testbed-node-3]
2026-03-07 00:48:26.461758 | orchestrator | changed: [testbed-node-4]
2026-03-07 00:48:26.461775 | orchestrator | changed: [testbed-node-5]
2026-03-07 00:48:26.461794 | orchestrator | changed: [testbed-manager]
2026-03-07 00:48:26.461949 | orchestrator | 
2026-03-07 00:48:26.461973 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************
2026-03-07 00:48:26.461991 | orchestrator | Saturday 07 March 2026 00:47:01 +0000 (0:00:17.785) 0:00:34.634 ********
2026-03-07 00:48:26.462002 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:48:26.462076 | orchestrator | changed: [testbed-node-3]
2026-03-07 00:48:26.462104 | orchestrator | changed: [testbed-node-5]
2026-03-07 00:48:26.462124 | orchestrator | changed: [testbed-node-4]
2026-03-07 00:48:26.462146 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:48:26.462166 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:48:26.462185 | orchestrator | changed: [testbed-manager]
2026-03-07 00:48:26.462204 | orchestrator | 
2026-03-07 00:48:26.462216 | orchestrator | TASK [osism.services.netdata : Include config tasks] ***************************
2026-03-07 00:48:26.462226 | orchestrator | Saturday 07 March 2026 00:47:50 +0000 (0:00:49.069) 0:01:23.704 ********
2026-03-07 00:48:26.462239 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-07 00:48:26.462252 | orchestrator | 
2026-03-07 00:48:26.462263 | orchestrator | TASK [osism.services.netdata : Copy configuration files] ***********************
2026-03-07 00:48:26.462280 | orchestrator | Saturday 07 March 2026 00:47:51 +0000 (0:00:01.458) 0:01:25.162 ********
2026-03-07 00:48:26.462298 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf)
2026-03-07 00:48:26.462316 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf)
2026-03-07 00:48:26.462333 | orchestrator | changed: [testbed-manager] => (item=netdata.conf)
2026-03-07 00:48:26.462349 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf)
2026-03-07 00:48:26.462396 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf)
2026-03-07 00:48:26.462418 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf)
2026-03-07 00:48:26.462436 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf)
2026-03-07 00:48:26.462473 | orchestrator | changed: [testbed-node-1] => (item=stream.conf)
2026-03-07 00:48:26.462489 | orchestrator | changed: [testbed-node-2] => (item=stream.conf)
2026-03-07 00:48:26.462505 | orchestrator | changed: [testbed-node-0] => (item=stream.conf)
2026-03-07 00:48:26.462520 | orchestrator | changed: [testbed-node-3] => (item=stream.conf)
2026-03-07 00:48:26.462536 | orchestrator | changed: [testbed-node-4] => (item=stream.conf)
2026-03-07 00:48:26.462552 | orchestrator | changed: [testbed-node-5] => (item=stream.conf)
2026-03-07 00:48:26.462568 | orchestrator | changed: [testbed-manager] => (item=stream.conf)
2026-03-07 00:48:26.462583 | orchestrator | 
2026-03-07 00:48:26.462600 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] ***
2026-03-07 00:48:26.462621 | orchestrator | Saturday 07 March 2026 00:47:58 +0000 (0:00:06.583) 0:01:31.746 ********
2026-03-07 00:48:26.462631 | orchestrator | ok: [testbed-manager]
2026-03-07 00:48:26.462641 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:48:26.462651 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:48:26.462660 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:48:26.462670 | 
orchestrator | ok: [testbed-node-3] 2026-03-07 00:48:26.462679 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:48:26.462689 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:48:26.462698 | orchestrator | 2026-03-07 00:48:26.462708 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] ************** 2026-03-07 00:48:26.462717 | orchestrator | Saturday 07 March 2026 00:47:59 +0000 (0:00:01.819) 0:01:33.565 ******** 2026-03-07 00:48:26.462727 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:48:26.462736 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:48:26.462746 | orchestrator | changed: [testbed-manager] 2026-03-07 00:48:26.462755 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:48:26.462765 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:48:26.462775 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:48:26.462784 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:48:26.462794 | orchestrator | 2026-03-07 00:48:26.462804 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] *************** 2026-03-07 00:48:26.462849 | orchestrator | Saturday 07 March 2026 00:48:02 +0000 (0:00:02.351) 0:01:35.917 ******** 2026-03-07 00:48:26.462861 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:48:26.462870 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:48:26.462880 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:48:26.462890 | orchestrator | ok: [testbed-manager] 2026-03-07 00:48:26.462899 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:48:26.462909 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:48:26.462918 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:48:26.462927 | orchestrator | 2026-03-07 00:48:26.462937 | orchestrator | TASK [osism.services.netdata : Manage service netdata] ************************* 2026-03-07 00:48:26.462946 | orchestrator | Saturday 07 March 2026 00:48:04 +0000 (0:00:02.013) 0:01:37.930 ******** 2026-03-07 00:48:26.462956 | 
orchestrator | ok: [testbed-manager] 2026-03-07 00:48:26.462965 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:48:26.462974 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:48:26.462984 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:48:26.462994 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:48:26.463003 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:48:26.463012 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:48:26.463022 | orchestrator | 2026-03-07 00:48:26.463031 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] *************** 2026-03-07 00:48:26.463041 | orchestrator | Saturday 07 March 2026 00:48:07 +0000 (0:00:03.054) 0:01:40.985 ******** 2026-03-07 00:48:26.463051 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager 2026-03-07 00:48:26.463065 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-07 00:48:26.463085 | orchestrator | 2026-03-07 00:48:26.463095 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] ********** 2026-03-07 00:48:26.463104 | orchestrator | Saturday 07 March 2026 00:48:10 +0000 (0:00:02.831) 0:01:43.817 ******** 2026-03-07 00:48:26.463113 | orchestrator | changed: [testbed-manager] 2026-03-07 00:48:26.463123 | orchestrator | 2026-03-07 00:48:26.463133 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] ************* 2026-03-07 00:48:26.463143 | orchestrator | Saturday 07 March 2026 00:48:12 +0000 (0:00:02.526) 0:01:46.344 ******** 2026-03-07 00:48:26.463152 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:48:26.463161 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:48:26.463171 | orchestrator | changed: [testbed-node-1] 2026-03-07 
2026-03-07 00:48:26.463182 | orchestrator | changed: [testbed-node-3]
2026-03-07 00:48:26.463199 | orchestrator | changed: [testbed-node-4]
2026-03-07 00:48:26.463215 | orchestrator | changed: [testbed-node-5]
2026-03-07 00:48:26.463231 | orchestrator | changed: [testbed-manager]
2026-03-07 00:48:26.463247 | orchestrator |
2026-03-07 00:48:26.463264 | orchestrator | PLAY RECAP *********************************************************************
2026-03-07 00:48:26.463278 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-07 00:48:26.463290 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-07 00:48:26.463300 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-07 00:48:26.463310 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-07 00:48:26.463332 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-07 00:48:26.463342 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-07 00:48:26.463352 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-07 00:48:26.463362 | orchestrator |
2026-03-07 00:48:26.463372 | orchestrator |
2026-03-07 00:48:26.463382 | orchestrator | TASKS RECAP ********************************************************************
2026-03-07 00:48:26.463392 | orchestrator | Saturday 07 March 2026 00:48:24 +0000 (0:00:11.557) 0:01:57.901 ********
2026-03-07 00:48:26.463401 | orchestrator | ===============================================================================
2026-03-07 00:48:26.463411 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 49.07s
2026-03-07 00:48:26.463426 | orchestrator | osism.services.netdata : Add repository -------------------------------- 17.79s
2026-03-07 00:48:26.463436 | orchestrator | osism.services.netdata : Restart service netdata ----------------------- 11.56s
2026-03-07 00:48:26.463446 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 6.58s
2026-03-07 00:48:26.463456 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 4.33s
2026-03-07 00:48:26.463465 | orchestrator | Group hosts based on enabled services ----------------------------------- 3.38s
2026-03-07 00:48:26.463475 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 3.32s
2026-03-07 00:48:26.463485 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 3.05s
2026-03-07 00:48:26.463494 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 3.01s
2026-03-07 00:48:26.463504 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 2.83s
2026-03-07 00:48:26.463514 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 2.53s
2026-03-07 00:48:26.463523 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 2.35s
2026-03-07 00:48:26.463552 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 2.01s
2026-03-07 00:48:26.463562 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.82s
2026-03-07 00:48:26.463571 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 1.58s
2026-03-07 00:48:26.463581 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.46s
2026-03-07 00:48:26.463592 | orchestrator | 2026-03-07 00:48:26 | INFO  | Task d95e2af2-7e1b-4c29-b379-866f57a21ae8 is in state SUCCESS
2026-03-07 00:48:26.464267 | orchestrator | 2026-03-07 00:48:26 | INFO  | Task 5b234dec-d220-433a-944e-90111a842742 is in state STARTED
2026-03-07 00:48:26.475729 | orchestrator | 2026-03-07 00:48:26 | INFO  | Task 58f3c3b8-7548-464c-bafb-42fd537ca7de is in state STARTED
2026-03-07 00:48:26.481788 | orchestrator | 2026-03-07 00:48:26 | INFO  | Wait 1 second(s) until the next check
2026-03-07 00:48:29.597049 | orchestrator | 2026-03-07 00:48:29 | INFO  | Task e4f25901-5658-48b6-a6bb-cc18f7d150f3 is in state STARTED
2026-03-07 00:48:29.601672 | orchestrator | 2026-03-07 00:48:29 | INFO  | Task 5b234dec-d220-433a-944e-90111a842742 is in state STARTED
2026-03-07 00:48:29.602593 | orchestrator | 2026-03-07 00:48:29 | INFO  | Task 58f3c3b8-7548-464c-bafb-42fd537ca7de is in state STARTED
2026-03-07 00:48:29.602638 | orchestrator | 2026-03-07 00:48:29 | INFO  | Wait 1 second(s) until the next check
2026-03-07 00:48:32.707367 | orchestrator | 2026-03-07 00:48:32 | INFO  | Task e4f25901-5658-48b6-a6bb-cc18f7d150f3 is in state STARTED
2026-03-07 00:48:32.712196 | orchestrator | 2026-03-07 00:48:32 | INFO  | Task 5b234dec-d220-433a-944e-90111a842742 is in state STARTED
2026-03-07 00:48:32.714398 | orchestrator | 2026-03-07 00:48:32 | INFO  | Task 58f3c3b8-7548-464c-bafb-42fd537ca7de is in state STARTED
2026-03-07 00:48:32.714428 | orchestrator | 2026-03-07 00:48:32 | INFO  | Wait 1 second(s) until the next check
2026-03-07 00:48:35.798627 | orchestrator | 2026-03-07 00:48:35 | INFO  | Task e4f25901-5658-48b6-a6bb-cc18f7d150f3 is in state STARTED
2026-03-07 00:48:35.807423 | orchestrator | 2026-03-07 00:48:35 | INFO  | Task 5b234dec-d220-433a-944e-90111a842742 is in state STARTED
2026-03-07 00:48:35.810451 | orchestrator | 2026-03-07 00:48:35 | INFO  | Task 58f3c3b8-7548-464c-bafb-42fd537ca7de is in state STARTED
2026-03-07 00:48:35.810572 | orchestrator | 2026-03-07 00:48:35 | INFO  | Wait 1 second(s) until the next check
2026-03-07 00:48:38.880432 | orchestrator | 2026-03-07 00:48:38 | INFO  | Task e4f25901-5658-48b6-a6bb-cc18f7d150f3 is in state STARTED
2026-03-07 00:48:38.883810 | orchestrator | 2026-03-07 00:48:38 | INFO  | Task 5b234dec-d220-433a-944e-90111a842742 is in state STARTED
2026-03-07 00:48:38.885461 | orchestrator | 2026-03-07 00:48:38 | INFO  | Task 58f3c3b8-7548-464c-bafb-42fd537ca7de is in state STARTED
2026-03-07 00:48:38.885515 | orchestrator | 2026-03-07 00:48:38 | INFO  | Wait 1 second(s) until the next check
2026-03-07 00:48:41.960622 | orchestrator | 2026-03-07 00:48:41 | INFO  | Task e4f25901-5658-48b6-a6bb-cc18f7d150f3 is in state STARTED
2026-03-07 00:48:41.963647 | orchestrator | 2026-03-07 00:48:41 | INFO  | Task 5b234dec-d220-433a-944e-90111a842742 is in state STARTED
2026-03-07 00:48:41.964278 | orchestrator | 2026-03-07 00:48:41 | INFO  | Task 58f3c3b8-7548-464c-bafb-42fd537ca7de is in state STARTED
2026-03-07 00:48:41.964729 | orchestrator | 2026-03-07 00:48:41 | INFO  | Wait 1 second(s) until the next check
2026-03-07 00:48:44.995076 | orchestrator | 2026-03-07 00:48:44 | INFO  | Task e4f25901-5658-48b6-a6bb-cc18f7d150f3 is in state STARTED
2026-03-07 00:48:44.997045 | orchestrator | 2026-03-07 00:48:44 | INFO  | Task 5b234dec-d220-433a-944e-90111a842742 is in state STARTED
2026-03-07 00:48:44.998984 | orchestrator | 2026-03-07 00:48:44 | INFO  | Task 58f3c3b8-7548-464c-bafb-42fd537ca7de is in state STARTED
2026-03-07 00:48:45.003718 | orchestrator | 2026-03-07 00:48:44 | INFO  | Wait 1 second(s) until the next check
2026-03-07 00:48:48.033603 | orchestrator | 2026-03-07 00:48:48 | INFO  | Task e4f25901-5658-48b6-a6bb-cc18f7d150f3 is in state STARTED
2026-03-07 00:48:48.034237 | orchestrator | 2026-03-07 00:48:48 | INFO  | Task 5b234dec-d220-433a-944e-90111a842742 is in state STARTED
2026-03-07 00:48:48.035924 | orchestrator | 2026-03-07 00:48:48 | INFO  | Task 58f3c3b8-7548-464c-bafb-42fd537ca7de is in state STARTED
2026-03-07 00:48:48.035960 | orchestrator | 2026-03-07 00:48:48 | INFO  | Wait 1 second(s) until the next check
2026-03-07 00:48:51.060636 | orchestrator | 2026-03-07 00:48:51 | INFO  | Task e4f25901-5658-48b6-a6bb-cc18f7d150f3 is in state STARTED
2026-03-07 00:48:51.062661 | orchestrator | 2026-03-07 00:48:51 | INFO  | Task 5b234dec-d220-433a-944e-90111a842742 is in state STARTED
2026-03-07 00:48:51.063412 | orchestrator | 2026-03-07 00:48:51 | INFO  | Task 58f3c3b8-7548-464c-bafb-42fd537ca7de is in state STARTED
2026-03-07 00:48:51.063559 | orchestrator | 2026-03-07 00:48:51 | INFO  | Wait 1 second(s) until the next check
2026-03-07 00:48:54.105362 | orchestrator | 2026-03-07 00:48:54 | INFO  | Task e4f25901-5658-48b6-a6bb-cc18f7d150f3 is in state STARTED
2026-03-07 00:48:54.106603 | orchestrator | 2026-03-07 00:48:54 | INFO  | Task 5b234dec-d220-433a-944e-90111a842742 is in state STARTED
2026-03-07 00:48:54.108182 | orchestrator | 2026-03-07 00:48:54 | INFO  | Task 58f3c3b8-7548-464c-bafb-42fd537ca7de is in state STARTED
2026-03-07 00:48:54.108236 | orchestrator | 2026-03-07 00:48:54 | INFO  | Wait 1 second(s) until the next check
2026-03-07 00:48:57.214463 | orchestrator | 2026-03-07 00:48:57 | INFO  | Task e4f25901-5658-48b6-a6bb-cc18f7d150f3 is in state STARTED
2026-03-07 00:48:57.215429 | orchestrator | 2026-03-07 00:48:57 | INFO  | Task 5b234dec-d220-433a-944e-90111a842742 is in state STARTED
2026-03-07 00:48:57.216433 | orchestrator | 2026-03-07 00:48:57 | INFO  | Task 58f3c3b8-7548-464c-bafb-42fd537ca7de is in state STARTED
2026-03-07 00:48:57.216492 | orchestrator | 2026-03-07 00:48:57 | INFO  | Wait 1 second(s) until the next check
2026-03-07 00:49:00.274127 | orchestrator |
2026-03-07 00:49:00.274179 | orchestrator |
2026-03-07 00:49:00.274185 | orchestrator | PLAY [Apply role common] *******************************************************
2026-03-07 00:49:00.274188 | orchestrator |
2026-03-07 00:49:00.274191 | orchestrator | TASK [common : include_tasks] **************************************************
2026-03-07 00:49:00.274195 | orchestrator |
Saturday 07 March 2026 00:46:14 +0000 (0:00:00.319) 0:00:00.319 ********
2026-03-07 00:49:00.274199 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-07 00:49:00.274203 | orchestrator |
2026-03-07 00:49:00.274206 | orchestrator | TASK [common : Ensuring config directories exist] ******************************
2026-03-07 00:49:00.274209 | orchestrator | Saturday 07 March 2026 00:46:16 +0000 (0:00:01.556) 0:00:01.875 ********
2026-03-07 00:49:00.274212 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-07 00:49:00.274238 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-07 00:49:00.274245 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-07 00:49:00.274271 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-07 00:49:00.274294 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-07 00:49:00.274311 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-07 00:49:00.274318 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-07 00:49:00.274324 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-07 00:49:00.274329 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-07 00:49:00.274334 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-07 00:49:00.274339 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-07 00:49:00.274345 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-07 00:49:00.274349 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-07 00:49:00.274353 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-07 00:49:00.274356 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-07 00:49:00.274359 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-07 00:49:00.274368 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-07 00:49:00.274371 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-07 00:49:00.274374 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-07 00:49:00.274378 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-07 00:49:00.274381 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-07 00:49:00.274384 | orchestrator |
2026-03-07 00:49:00.274387 | orchestrator | TASK [common : include_tasks] **************************************************
2026-03-07 00:49:00.274390 | orchestrator | Saturday 07 March 2026 00:46:21 +0000 (0:00:04.748) 0:00:06.624 ********
2026-03-07 00:49:00.274393 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-07 00:49:00.274397 | orchestrator |
2026-03-07 00:49:00.274400 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] *********
2026-03-07 00:49:00.274405 | orchestrator | Saturday 07 March 2026 00:46:22 +0000 (0:00:01.841) 0:00:08.466 ********
2026-03-07 00:49:00.274414 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-07 00:49:00.274433 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-07 00:49:00.274452 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-07 00:49:00.274461 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-07 00:49:00.274464 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:49:00.274469 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-07 00:49:00.274473 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:49:00.274476 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:49:00.274479 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:49:00.274489 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-07 00:49:00.274494 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-07 00:49:00.274499 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:49:00.274506 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:49:00.274510 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:49:00.274516 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:49:00.274519 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:49:00.274522 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:49:00.274528 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:49:00.274534 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:49:00.274537 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:49:00.274540 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:49:00.274543 | orchestrator |
2026-03-07 00:49:00.274546 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] ***
2026-03-07 00:49:00.274550 | orchestrator | Saturday 07 March 2026 00:46:28 +0000 (0:00:05.920) 0:00:14.386 ********
2026-03-07 00:49:00.274555 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-07 00:49:00.274558 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:49:00.274561 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:49:00.274565 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-07 00:49:00.274576 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:49:00.274580 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:49:00.274583 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-07 00:49:00.274586 | orchestrator | skipping: [testbed-manager]
2026-03-07 00:49:00.274590 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image':
'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-07 00:49:00.274594 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-07 00:49:00.274598 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-07 00:49:00.274602 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-07 00:49:00.274611 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-07 00:49:00.274614 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:49:00.274617 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:49:00.274625 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-07 00:49:00.274629 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-07 00:49:00.274632 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-07 00:49:00.274635 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-07 00:49:00.274640 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-07 00:49:00.274643 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-07 00:49:00.274647 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:49:00.274650 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:49:00.274655 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:49:00.274658 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-07 00:49:00.274663 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-07 00:49:00.274667 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-07 00:49:00.274670 | orchestrator | skipping: [testbed-node-5] 
2026-03-07 00:49:00.274673 | orchestrator | 2026-03-07 00:49:00.274676 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2026-03-07 00:49:00.274679 | orchestrator | Saturday 07 March 2026 00:46:30 +0000 (0:00:02.138) 0:00:16.524 ******** 2026-03-07 00:49:00.274683 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-07 00:49:00.274686 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-07 00:49:00.274691 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-07 00:49:00.274694 | orchestrator | skipping: 
[testbed-manager] 2026-03-07 00:49:00.274697 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-07 00:49:00.274704 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-07 00:49:00.274708 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-07 00:49:00.274711 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:49:00.274718 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-07 00:49:00.274721 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-07 00:49:00.274725 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-07 00:49:00.274728 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:49:00.274731 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-07 00:49:00.274736 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-07 00:49:00.274741 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-07 00:49:00.274744 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:49:00.274748 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-07 00:49:00.274753 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-07 00:49:00 | INFO  | Task e4f25901-5658-48b6-a6bb-cc18f7d150f3 is in state SUCCESS 2026-03-07 00:49:00.274760 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-07 00:49:00.274763 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:49:00.274766 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-07 00:49:00.274769 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 
'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-07 00:49:00.274774 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-07 00:49:00.274779 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:49:00.274782 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-07 00:49:00.274786 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-07 00:49:00.274789 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-07 00:49:00.274792 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:49:00.274795 | orchestrator | 2026-03-07 00:49:00.274798 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2026-03-07 00:49:00.274802 | orchestrator | Saturday 07 March 2026 00:46:35 +0000 (0:00:04.192) 0:00:20.717 ******** 2026-03-07 00:49:00.274805 | orchestrator | skipping: [testbed-manager] 2026-03-07 00:49:00.274808 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:49:00.274813 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:49:00.274816 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:49:00.274819 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:49:00.274822 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:49:00.274825 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:49:00.274828 | orchestrator | 2026-03-07 00:49:00.274832 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2026-03-07 00:49:00.274835 | orchestrator | Saturday 07 March 2026 00:46:36 +0000 (0:00:01.312) 0:00:22.029 ******** 2026-03-07 00:49:00.274838 | orchestrator | skipping: [testbed-manager] 2026-03-07 00:49:00.274841 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:49:00.274844 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:49:00.274847 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:49:00.274850 | orchestrator | skipping: 
[testbed-node-3] 2026-03-07 00:49:00.274853 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:49:00.274856 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:49:00.274859 | orchestrator | 2026-03-07 00:49:00.274862 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2026-03-07 00:49:00.274865 | orchestrator | Saturday 07 March 2026 00:46:38 +0000 (0:00:02.369) 0:00:24.398 ******** 2026-03-07 00:49:00.274869 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-07 00:49:00.274874 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-07 00:49:00.274877 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-07 00:49:00.274881 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-07 00:49:00.274884 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-07 00:49:00.274887 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 00:49:00.274892 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 
'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-07 00:49:00.274896 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 00:49:00.274901 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 00:49:00.274909 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 
'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 00:49:00.274912 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-07 00:49:00.274915 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 00:49:00.274918 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}}) 2026-03-07 00:49:00.274938 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 00:49:00.274941 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 00:49:00.274944 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 00:49:00.274950 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2026-03-07 00:49:00.274953 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 00:49:00.274958 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 00:49:00.274962 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 00:49:00.274965 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 00:49:00.274968 | orchestrator | 2026-03-07 00:49:00.274971 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2026-03-07 00:49:00.274974 | orchestrator | Saturday 07 March 2026 00:46:48 +0000 (0:00:09.734) 0:00:34.133 ******** 2026-03-07 00:49:00.274977 | orchestrator | [WARNING]: Skipped 2026-03-07 00:49:00.274981 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2026-03-07 00:49:00.274984 | orchestrator | to this access issue: 2026-03-07 00:49:00.274988 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2026-03-07 00:49:00.274991 | orchestrator | directory 2026-03-07 00:49:00.274994 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-07 00:49:00.274997 | orchestrator | 2026-03-07 00:49:00.275000 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2026-03-07 00:49:00.275003 | orchestrator | Saturday 07 March 2026 00:46:53 +0000 (0:00:04.573) 0:00:38.707 ******** 2026-03-07 00:49:00.275006 | orchestrator | [WARNING]: Skipped 2026-03-07 00:49:00.275012 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2026-03-07 00:49:00.275015 | orchestrator | to this access issue: 2026-03-07 00:49:00.275018 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2026-03-07 00:49:00.275021 | orchestrator | directory 2026-03-07 00:49:00.275026 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-07 00:49:00.275029 | orchestrator | 2026-03-07 00:49:00.275032 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2026-03-07 00:49:00.275036 | orchestrator | Saturday 07 March 2026 00:46:55 +0000 (0:00:02.682) 0:00:41.390 ******** 2026-03-07 00:49:00.275039 | orchestrator | [WARNING]: Skipped 2026-03-07 
00:49:00.275042 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2026-03-07 00:49:00.275045 | orchestrator | to this access issue: 2026-03-07 00:49:00.275048 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2026-03-07 00:49:00.275051 | orchestrator | directory 2026-03-07 00:49:00.275054 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-07 00:49:00.275057 | orchestrator | 2026-03-07 00:49:00.275060 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2026-03-07 00:49:00.275063 | orchestrator | Saturday 07 March 2026 00:46:57 +0000 (0:00:01.371) 0:00:42.761 ******** 2026-03-07 00:49:00.275067 | orchestrator | [WARNING]: Skipped 2026-03-07 00:49:00.275070 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2026-03-07 00:49:00.275073 | orchestrator | to this access issue: 2026-03-07 00:49:00.275076 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2026-03-07 00:49:00.275079 | orchestrator | directory 2026-03-07 00:49:00.275082 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-07 00:49:00.275085 | orchestrator | 2026-03-07 00:49:00.275088 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2026-03-07 00:49:00.275091 | orchestrator | Saturday 07 March 2026 00:46:58 +0000 (0:00:01.245) 0:00:44.007 ******** 2026-03-07 00:49:00.275095 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:49:00.275098 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:49:00.275101 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:49:00.275104 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:49:00.275107 | orchestrator | changed: [testbed-manager] 2026-03-07 00:49:00.275110 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:49:00.275113 | orchestrator | changed: 
[testbed-node-5] 2026-03-07 00:49:00.275116 | orchestrator | 2026-03-07 00:49:00.275119 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2026-03-07 00:49:00.275122 | orchestrator | Saturday 07 March 2026 00:47:05 +0000 (0:00:06.653) 0:00:50.660 ******** 2026-03-07 00:49:00.275126 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-07 00:49:00.275130 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-07 00:49:00.275134 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-07 00:49:00.275137 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-07 00:49:00.275140 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-07 00:49:00.275143 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-07 00:49:00.275146 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-07 00:49:00.275149 | orchestrator | 2026-03-07 00:49:00.275152 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2026-03-07 00:49:00.275155 | orchestrator | Saturday 07 March 2026 00:47:11 +0000 (0:00:06.687) 0:00:57.348 ******** 2026-03-07 00:49:00.275158 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:49:00.275162 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:49:00.275165 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:49:00.275168 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:49:00.275171 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:49:00.275174 | orchestrator | changed: 
[testbed-node-4] 2026-03-07 00:49:00.275180 | orchestrator | changed: [testbed-manager] 2026-03-07 00:49:00.275183 | orchestrator | 2026-03-07 00:49:00.275186 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2026-03-07 00:49:00.275190 | orchestrator | Saturday 07 March 2026 00:47:17 +0000 (0:00:05.668) 0:01:03.016 ******** 2026-03-07 00:49:00.275193 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-07 00:49:00.275198 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-07 00:49:00.275202 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2026-03-07 00:49:00.275205 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-07 00:49:00.275208 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-07 00:49:00.275213 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-07 00:49:00.275217 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-07 00:49:00.275223 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-07 00:49:00.275227 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-07 00:49:00.275233 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-07 00:49:00.275236 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-07 00:49:00.275240 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 00:49:00.275243 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 00:49:00.275248 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 00:49:00.275251 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-07 00:49:00.275258 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-07 00:49:00.275261 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 00:49:00.275267 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-07 00:49:00.275270 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-07 00:49:00.275273 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 00:49:00.275276 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 00:49:00.275280 | orchestrator | 2026-03-07 00:49:00.275283 | orchestrator 
| TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2026-03-07 00:49:00.275286 | orchestrator | Saturday 07 March 2026 00:47:20 +0000 (0:00:03.454) 0:01:06.471 ******** 2026-03-07 00:49:00.275289 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-07 00:49:00.275294 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-07 00:49:00.275300 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-07 00:49:00.275303 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-07 00:49:00.275306 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-07 00:49:00.275309 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-07 00:49:00.275312 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-07 00:49:00.275315 | orchestrator | 2026-03-07 00:49:00.275318 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2026-03-07 00:49:00.275321 | orchestrator | Saturday 07 March 2026 00:47:24 +0000 (0:00:04.000) 0:01:10.472 ******** 2026-03-07 00:49:00.275325 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-07 00:49:00.275328 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-07 00:49:00.275331 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-07 00:49:00.275334 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-07 00:49:00.275337 | orchestrator | changed: [testbed-node-4] => 
(item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-07 00:49:00.275340 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-07 00:49:00.275343 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-07 00:49:00.275346 | orchestrator | 2026-03-07 00:49:00.275349 | orchestrator | TASK [common : Check common containers] **************************************** 2026-03-07 00:49:00.275352 | orchestrator | Saturday 07 March 2026 00:47:29 +0000 (0:00:04.676) 0:01:15.148 ******** 2026-03-07 00:49:00.275356 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-07 00:49:00.275361 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-07 00:49:00.275393 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-07 00:49:00.275397 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 00:49:00.275406 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-07 00:49:00.275409 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': 
{}}}) 2026-03-07 00:49:00.275412 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 00:49:00.275416 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-07 00:49:00.275419 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 00:49:00.275424 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 
'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-07 00:49:00.275427 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 00:49:00.275432 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 00:49:00.275438 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 00:49:00.275441 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 00:49:00.275444 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 00:49:00.275448 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 00:49:00.275451 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 
'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 00:49:00.275456 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 00:49:00.275459 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 00:49:00.275464 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 00:49:00.275467 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 
'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 00:49:00.275471 | orchestrator | 2026-03-07 00:49:00.275474 | orchestrator | TASK [common : Creating log volume] ******************************************** 2026-03-07 00:49:00.275477 | orchestrator | Saturday 07 March 2026 00:47:33 +0000 (0:00:03.789) 0:01:18.937 ******** 2026-03-07 00:49:00.275480 | orchestrator | changed: [testbed-manager] 2026-03-07 00:49:00.275483 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:49:00.275486 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:49:00.275489 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:49:00.275492 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:49:00.275495 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:49:00.275498 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:49:00.275501 | orchestrator | 2026-03-07 00:49:00.275505 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2026-03-07 00:49:00.275508 | orchestrator | Saturday 07 March 2026 00:47:35 +0000 (0:00:01.933) 0:01:20.871 ******** 2026-03-07 00:49:00.275511 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:49:00.275514 | orchestrator | changed: [testbed-manager] 2026-03-07 00:49:00.275517 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:49:00.275520 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:49:00.275523 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:49:00.275526 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:49:00.275529 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:49:00.275532 | orchestrator | 2026-03-07 00:49:00.275535 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-07 00:49:00.275538 | orchestrator | Saturday 07 March 2026 00:47:36 +0000 
(0:00:01.293) 0:01:22.165 ******** 2026-03-07 00:49:00.275542 | orchestrator | 2026-03-07 00:49:00.275545 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-07 00:49:00.275548 | orchestrator | Saturday 07 March 2026 00:47:36 +0000 (0:00:00.073) 0:01:22.238 ******** 2026-03-07 00:49:00.275551 | orchestrator | 2026-03-07 00:49:00.275554 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-07 00:49:00.275557 | orchestrator | Saturday 07 March 2026 00:47:36 +0000 (0:00:00.067) 0:01:22.306 ******** 2026-03-07 00:49:00.275560 | orchestrator | 2026-03-07 00:49:00.275563 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-07 00:49:00.275567 | orchestrator | Saturday 07 March 2026 00:47:36 +0000 (0:00:00.257) 0:01:22.563 ******** 2026-03-07 00:49:00.275570 | orchestrator | 2026-03-07 00:49:00.275573 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-07 00:49:00.275576 | orchestrator | Saturday 07 March 2026 00:47:37 +0000 (0:00:00.073) 0:01:22.636 ******** 2026-03-07 00:49:00.275579 | orchestrator | 2026-03-07 00:49:00.275582 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-07 00:49:00.275585 | orchestrator | Saturday 07 March 2026 00:47:37 +0000 (0:00:00.066) 0:01:22.703 ******** 2026-03-07 00:49:00.275590 | orchestrator | 2026-03-07 00:49:00.275596 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-07 00:49:00.275599 | orchestrator | Saturday 07 March 2026 00:47:37 +0000 (0:00:00.068) 0:01:22.771 ******** 2026-03-07 00:49:00.275602 | orchestrator | 2026-03-07 00:49:00.275605 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2026-03-07 00:49:00.275608 | orchestrator | Saturday 07 March 2026 00:47:37 
+0000 (0:00:00.092) 0:01:22.863 ******** 2026-03-07 00:49:00.275611 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:49:00.275614 | orchestrator | changed: [testbed-manager] 2026-03-07 00:49:00.275617 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:49:00.275620 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:49:00.275623 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:49:00.275626 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:49:00.275629 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:49:00.275633 | orchestrator | 2026-03-07 00:49:00.275638 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2026-03-07 00:49:00.275644 | orchestrator | Saturday 07 March 2026 00:48:15 +0000 (0:00:37.882) 0:02:00.746 ******** 2026-03-07 00:49:00.275652 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:49:00.275662 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:49:00.275668 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:49:00.275674 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:49:00.275679 | orchestrator | changed: [testbed-manager] 2026-03-07 00:49:00.275683 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:49:00.275688 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:49:00.275693 | orchestrator | 2026-03-07 00:49:00.275699 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2026-03-07 00:49:00.275704 | orchestrator | Saturday 07 March 2026 00:48:46 +0000 (0:00:31.298) 0:02:32.045 ******** 2026-03-07 00:49:00.275710 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:49:00.275716 | orchestrator | ok: [testbed-manager] 2026-03-07 00:49:00.275721 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:49:00.275727 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:49:00.275732 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:49:00.275738 | orchestrator | ok: [testbed-node-4] 2026-03-07 
00:49:00.275744 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:49:00.275747 | orchestrator | 2026-03-07 00:49:00.275751 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2026-03-07 00:49:00.275754 | orchestrator | Saturday 07 March 2026 00:48:48 +0000 (0:00:02.022) 0:02:34.067 ******** 2026-03-07 00:49:00.275757 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:49:00.275760 | orchestrator | changed: [testbed-manager] 2026-03-07 00:49:00.275763 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:49:00.275766 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:49:00.275773 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:49:00.275776 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:49:00.275779 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:49:00.275782 | orchestrator | 2026-03-07 00:49:00.275785 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-07 00:49:00.275789 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-07 00:49:00.275795 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-07 00:49:00.275800 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-07 00:49:00.275807 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-07 00:49:00.275814 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-07 00:49:00.275825 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-07 00:49:00.275830 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-07 00:49:00.275835 | orchestrator | 2026-03-07 
00:49:00.275839 | orchestrator | 2026-03-07 00:49:00.275844 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-07 00:49:00.275848 | orchestrator | Saturday 07 March 2026 00:48:57 +0000 (0:00:08.846) 0:02:42.914 ******** 2026-03-07 00:49:00.275853 | orchestrator | =============================================================================== 2026-03-07 00:49:00.275858 | orchestrator | common : Restart fluentd container ------------------------------------- 37.88s 2026-03-07 00:49:00.275863 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 31.30s 2026-03-07 00:49:00.275868 | orchestrator | common : Copying over config.json files for services -------------------- 9.73s 2026-03-07 00:49:00.275873 | orchestrator | common : Restart cron container ----------------------------------------- 8.85s 2026-03-07 00:49:00.275879 | orchestrator | common : Copying over cron logrotate config file ------------------------ 6.69s 2026-03-07 00:49:00.275884 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 6.65s 2026-03-07 00:49:00.275890 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 5.92s 2026-03-07 00:49:00.275893 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 5.67s 2026-03-07 00:49:00.275896 | orchestrator | common : Ensuring config directories exist ------------------------------ 4.75s 2026-03-07 00:49:00.275899 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 4.68s 2026-03-07 00:49:00.275902 | orchestrator | common : Find custom fluentd input config files ------------------------- 4.57s 2026-03-07 00:49:00.275905 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 4.19s 2026-03-07 00:49:00.275908 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox 
------------------------ 4.00s 2026-03-07 00:49:00.275911 | orchestrator | common : Check common containers ---------------------------------------- 3.79s 2026-03-07 00:49:00.275914 | orchestrator | common : Ensuring config directories have correct owner and permission --- 3.45s 2026-03-07 00:49:00.275917 | orchestrator | common : Find custom fluentd filter config files ------------------------ 2.68s 2026-03-07 00:49:00.275947 | orchestrator | common : Restart systemd-tmpfiles --------------------------------------- 2.37s 2026-03-07 00:49:00.275951 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 2.14s 2026-03-07 00:49:00.275954 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.02s 2026-03-07 00:49:00.275957 | orchestrator | common : Creating log volume -------------------------------------------- 1.93s 2026-03-07 00:49:00.275964 | orchestrator | 2026-03-07 00:49:00 | INFO  | Task da3f6668-bb55-453e-9882-98f52b172764 is in state STARTED 2026-03-07 00:49:00.276000 | orchestrator | 2026-03-07 00:49:00 | INFO  | Task bed1bdb5-c57e-486d-9209-a3be88068ef7 is in state STARTED 2026-03-07 00:49:00.277110 | orchestrator | 2026-03-07 00:49:00 | INFO  | Task bb00c897-a3af-43e3-9197-5a04c6ee5bfb is in state STARTED 2026-03-07 00:49:00.278141 | orchestrator | 2026-03-07 00:49:00 | INFO  | Task 8563f1e5-128f-4928-b840-a6da32a959b2 is in state STARTED 2026-03-07 00:49:00.280179 | orchestrator | 2026-03-07 00:49:00 | INFO  | Task 5b234dec-d220-433a-944e-90111a842742 is in state STARTED 2026-03-07 00:49:00.284694 | orchestrator | 2026-03-07 00:49:00 | INFO  | Task 58f3c3b8-7548-464c-bafb-42fd537ca7de is in state STARTED 2026-03-07 00:49:00.284739 | orchestrator | 2026-03-07 00:49:00 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:49:03.312432 | orchestrator | 2026-03-07 00:49:03 | INFO  | Task da3f6668-bb55-453e-9882-98f52b172764 is in state STARTED 2026-03-07 00:49:03.313679 | 
orchestrator | 2026-03-07 00:49:03 | INFO  | Task bed1bdb5-c57e-486d-9209-a3be88068ef7 is in state STARTED 2026-03-07 00:49:03.316478 | orchestrator | 2026-03-07 00:49:03 | INFO  | Task bb00c897-a3af-43e3-9197-5a04c6ee5bfb is in state STARTED 2026-03-07 00:49:03.317883 | orchestrator | 2026-03-07 00:49:03 | INFO  | Task 8563f1e5-128f-4928-b840-a6da32a959b2 is in state STARTED 2026-03-07 00:49:03.319163 | orchestrator | 2026-03-07 00:49:03 | INFO  | Task 5b234dec-d220-433a-944e-90111a842742 is in state STARTED 2026-03-07 00:49:03.320750 | orchestrator | 2026-03-07 00:49:03 | INFO  | Task 58f3c3b8-7548-464c-bafb-42fd537ca7de is in state STARTED 2026-03-07 00:49:03.320897 | orchestrator | 2026-03-07 00:49:03 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:49:06.377667 | orchestrator | 2026-03-07 00:49:06 | INFO  | Task da3f6668-bb55-453e-9882-98f52b172764 is in state STARTED 2026-03-07 00:49:06.379314 | orchestrator | 2026-03-07 00:49:06 | INFO  | Task bed1bdb5-c57e-486d-9209-a3be88068ef7 is in state STARTED 2026-03-07 00:49:06.381579 | orchestrator | 2026-03-07 00:49:06 | INFO  | Task bb00c897-a3af-43e3-9197-5a04c6ee5bfb is in state STARTED 2026-03-07 00:49:06.383393 | orchestrator | 2026-03-07 00:49:06 | INFO  | Task 8563f1e5-128f-4928-b840-a6da32a959b2 is in state STARTED 2026-03-07 00:49:06.385287 | orchestrator | 2026-03-07 00:49:06 | INFO  | Task 5b234dec-d220-433a-944e-90111a842742 is in state STARTED 2026-03-07 00:49:06.387192 | orchestrator | 2026-03-07 00:49:06 | INFO  | Task 58f3c3b8-7548-464c-bafb-42fd537ca7de is in state STARTED 2026-03-07 00:49:06.387232 | orchestrator | 2026-03-07 00:49:06 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:49:09.497843 | orchestrator | 2026-03-07 00:49:09 | INFO  | Task da3f6668-bb55-453e-9882-98f52b172764 is in state STARTED 2026-03-07 00:49:09.497943 | orchestrator | 2026-03-07 00:49:09 | INFO  | Task bed1bdb5-c57e-486d-9209-a3be88068ef7 is in state STARTED 2026-03-07 00:49:09.498002 | 
orchestrator | 2026-03-07 00:49:09 | INFO  | Task bb00c897-a3af-43e3-9197-5a04c6ee5bfb is in state STARTED 2026-03-07 00:49:09.498010 | orchestrator | 2026-03-07 00:49:09 | INFO  | Task 8563f1e5-128f-4928-b840-a6da32a959b2 is in state STARTED 2026-03-07 00:49:09.498057 | orchestrator | 2026-03-07 00:49:09 | INFO  | Task 5b234dec-d220-433a-944e-90111a842742 is in state STARTED 2026-03-07 00:49:09.498065 | orchestrator | 2026-03-07 00:49:09 | INFO  | Task 58f3c3b8-7548-464c-bafb-42fd537ca7de is in state STARTED 2026-03-07 00:49:09.498072 | orchestrator | 2026-03-07 00:49:09 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:49:12.486369 | orchestrator | 2026-03-07 00:49:12 | INFO  | Task da3f6668-bb55-453e-9882-98f52b172764 is in state STARTED 2026-03-07 00:49:12.488383 | orchestrator | 2026-03-07 00:49:12 | INFO  | Task bed1bdb5-c57e-486d-9209-a3be88068ef7 is in state STARTED 2026-03-07 00:49:12.489142 | orchestrator | 2026-03-07 00:49:12 | INFO  | Task bb00c897-a3af-43e3-9197-5a04c6ee5bfb is in state STARTED 2026-03-07 00:49:12.490818 | orchestrator | 2026-03-07 00:49:12 | INFO  | Task 8563f1e5-128f-4928-b840-a6da32a959b2 is in state STARTED 2026-03-07 00:49:12.494274 | orchestrator | 2026-03-07 00:49:12 | INFO  | Task 5b234dec-d220-433a-944e-90111a842742 is in state STARTED 2026-03-07 00:49:12.495662 | orchestrator | 2026-03-07 00:49:12 | INFO  | Task 58f3c3b8-7548-464c-bafb-42fd537ca7de is in state STARTED 2026-03-07 00:49:12.495716 | orchestrator | 2026-03-07 00:49:12 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:49:15.532954 | orchestrator | 2026-03-07 00:49:15 | INFO  | Task da3f6668-bb55-453e-9882-98f52b172764 is in state STARTED 2026-03-07 00:49:15.534417 | orchestrator | 2026-03-07 00:49:15 | INFO  | Task bed1bdb5-c57e-486d-9209-a3be88068ef7 is in state STARTED 2026-03-07 00:49:15.535834 | orchestrator | 2026-03-07 00:49:15 | INFO  | Task bb00c897-a3af-43e3-9197-5a04c6ee5bfb is in state STARTED 2026-03-07 00:49:15.537077 | 
orchestrator | 2026-03-07 00:49:15 | INFO  | Task 8563f1e5-128f-4928-b840-a6da32a959b2 is in state STARTED 2026-03-07 00:49:15.538127 | orchestrator | 2026-03-07 00:49:15 | INFO  | Task 5b234dec-d220-433a-944e-90111a842742 is in state STARTED 2026-03-07 00:49:15.540200 | orchestrator | 2026-03-07 00:49:15 | INFO  | Task 58f3c3b8-7548-464c-bafb-42fd537ca7de is in state STARTED 2026-03-07 00:49:15.540506 | orchestrator | 2026-03-07 00:49:15 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:49:18.582285 | orchestrator | 2026-03-07 00:49:18 | INFO  | Task da3f6668-bb55-453e-9882-98f52b172764 is in state STARTED 2026-03-07 00:49:18.583070 | orchestrator | 2026-03-07 00:49:18 | INFO  | Task bed1bdb5-c57e-486d-9209-a3be88068ef7 is in state STARTED 2026-03-07 00:49:18.585405 | orchestrator | 2026-03-07 00:49:18 | INFO  | Task bb00c897-a3af-43e3-9197-5a04c6ee5bfb is in state STARTED 2026-03-07 00:49:18.587670 | orchestrator | 2026-03-07 00:49:18 | INFO  | Task 8563f1e5-128f-4928-b840-a6da32a959b2 is in state STARTED 2026-03-07 00:49:18.591141 | orchestrator | 2026-03-07 00:49:18 | INFO  | Task 5b234dec-d220-433a-944e-90111a842742 is in state STARTED 2026-03-07 00:49:18.591821 | orchestrator | 2026-03-07 00:49:18 | INFO  | Task 58f3c3b8-7548-464c-bafb-42fd537ca7de is in state STARTED 2026-03-07 00:49:18.591935 | orchestrator | 2026-03-07 00:49:18 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:49:21.653142 | orchestrator | 2026-03-07 00:49:21 | INFO  | Task da3f6668-bb55-453e-9882-98f52b172764 is in state STARTED 2026-03-07 00:49:21.657095 | orchestrator | 2026-03-07 00:49:21 | INFO  | Task bed1bdb5-c57e-486d-9209-a3be88068ef7 is in state STARTED 2026-03-07 00:49:21.660456 | orchestrator | 2026-03-07 00:49:21 | INFO  | Task bb00c897-a3af-43e3-9197-5a04c6ee5bfb is in state STARTED 2026-03-07 00:49:21.664898 | orchestrator | 2026-03-07 00:49:21 | INFO  | Task 8563f1e5-128f-4928-b840-a6da32a959b2 is in state STARTED 2026-03-07 00:49:21.665986 | 
orchestrator | 2026-03-07 00:49:21 | INFO  | Task 5b234dec-d220-433a-944e-90111a842742 is in state STARTED 2026-03-07 00:49:21.669833 | orchestrator | 2026-03-07 00:49:21 | INFO  | Task 58f3c3b8-7548-464c-bafb-42fd537ca7de is in state STARTED 2026-03-07 00:49:21.672471 | orchestrator | 2026-03-07 00:49:21 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:49:24.761126 | orchestrator | 2026-03-07 00:49:24 | INFO  | Task da3f6668-bb55-453e-9882-98f52b172764 is in state STARTED 2026-03-07 00:49:24.761203 | orchestrator | 2026-03-07 00:49:24 | INFO  | Task bed1bdb5-c57e-486d-9209-a3be88068ef7 is in state STARTED 2026-03-07 00:49:24.761211 | orchestrator | 2026-03-07 00:49:24 | INFO  | Task bb00c897-a3af-43e3-9197-5a04c6ee5bfb is in state SUCCESS 2026-03-07 00:49:24.761216 | orchestrator | 2026-03-07 00:49:24 | INFO  | Task 8563f1e5-128f-4928-b840-a6da32a959b2 is in state STARTED 2026-03-07 00:49:24.761370 | orchestrator | 2026-03-07 00:49:24 | INFO  | Task 5b234dec-d220-433a-944e-90111a842742 is in state STARTED 2026-03-07 00:49:24.762147 | orchestrator | 2026-03-07 00:49:24 | INFO  | Task 58f3c3b8-7548-464c-bafb-42fd537ca7de is in state STARTED 2026-03-07 00:49:24.787991 | orchestrator | 2026-03-07 00:49:24 | INFO  | Task 171b352d-5066-42cc-9db4-5c9db626783c is in state STARTED 2026-03-07 00:49:24.788084 | orchestrator | 2026-03-07 00:49:24 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:49:27.813545 | orchestrator | 2026-03-07 00:49:27 | INFO  | Task da3f6668-bb55-453e-9882-98f52b172764 is in state STARTED 2026-03-07 00:49:27.815503 | orchestrator | 2026-03-07 00:49:27 | INFO  | Task bed1bdb5-c57e-486d-9209-a3be88068ef7 is in state STARTED 2026-03-07 00:49:27.817868 | orchestrator | 2026-03-07 00:49:27 | INFO  | Task 8563f1e5-128f-4928-b840-a6da32a959b2 is in state STARTED 2026-03-07 00:49:27.817934 | orchestrator | 2026-03-07 00:49:27 | INFO  | Task 5b234dec-d220-433a-944e-90111a842742 is in state STARTED 2026-03-07 00:49:27.819123 | 
orchestrator | 2026-03-07 00:49:27 | INFO  | Task 58f3c3b8-7548-464c-bafb-42fd537ca7de is in state STARTED 2026-03-07 00:49:27.820329 | orchestrator | 2026-03-07 00:49:27 | INFO  | Task 171b352d-5066-42cc-9db4-5c9db626783c is in state STARTED 2026-03-07 00:49:27.820433 | orchestrator | 2026-03-07 00:49:27 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:49:30.908667 | orchestrator | 2026-03-07 00:49:30 | INFO  | Task da3f6668-bb55-453e-9882-98f52b172764 is in state STARTED 2026-03-07 00:49:30.910990 | orchestrator | 2026-03-07 00:49:30 | INFO  | Task bed1bdb5-c57e-486d-9209-a3be88068ef7 is in state STARTED 2026-03-07 00:49:30.914770 | orchestrator | 2026-03-07 00:49:30 | INFO  | Task 8563f1e5-128f-4928-b840-a6da32a959b2 is in state STARTED 2026-03-07 00:49:30.918484 | orchestrator | 2026-03-07 00:49:30 | INFO  | Task 5b234dec-d220-433a-944e-90111a842742 is in state STARTED 2026-03-07 00:49:30.921451 | orchestrator | 2026-03-07 00:49:30 | INFO  | Task 58f3c3b8-7548-464c-bafb-42fd537ca7de is in state STARTED 2026-03-07 00:49:30.924315 | orchestrator | 2026-03-07 00:49:30 | INFO  | Task 171b352d-5066-42cc-9db4-5c9db626783c is in state STARTED 2026-03-07 00:49:30.925769 | orchestrator | 2026-03-07 00:49:30 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:49:34.082320 | orchestrator | 2026-03-07 00:49:34 | INFO  | Task da3f6668-bb55-453e-9882-98f52b172764 is in state STARTED 2026-03-07 00:49:34.085875 | orchestrator | 2026-03-07 00:49:34 | INFO  | Task bed1bdb5-c57e-486d-9209-a3be88068ef7 is in state STARTED 2026-03-07 00:49:34.088393 | orchestrator | 2026-03-07 00:49:34 | INFO  | Task 8563f1e5-128f-4928-b840-a6da32a959b2 is in state STARTED 2026-03-07 00:49:34.094113 | orchestrator | 2026-03-07 00:49:34 | INFO  | Task 5b234dec-d220-433a-944e-90111a842742 is in state STARTED 2026-03-07 00:49:34.094644 | orchestrator | 2026-03-07 00:49:34 | INFO  | Task 58f3c3b8-7548-464c-bafb-42fd537ca7de is in state STARTED 2026-03-07 00:49:34.095793 | 
orchestrator | 2026-03-07 00:49:34 | INFO  | Task 171b352d-5066-42cc-9db4-5c9db626783c is in state STARTED 2026-03-07 00:49:34.095859 | orchestrator | 2026-03-07 00:49:34 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:49:37.165004 | orchestrator | 2026-03-07 00:49:37.165898 | orchestrator | 2026-03-07 00:49:37.165934 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-07 00:49:37.165948 | orchestrator | 2026-03-07 00:49:37.166011 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-07 00:49:37.166113 | orchestrator | Saturday 07 March 2026 00:49:02 +0000 (0:00:00.249) 0:00:00.249 ******** 2026-03-07 00:49:37.166125 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:49:37.166138 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:49:37.166149 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:49:37.166160 | orchestrator | 2026-03-07 00:49:37.166170 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-07 00:49:37.166181 | orchestrator | Saturday 07 March 2026 00:49:02 +0000 (0:00:00.483) 0:00:00.732 ******** 2026-03-07 00:49:37.166192 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2026-03-07 00:49:37.166204 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2026-03-07 00:49:37.166242 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2026-03-07 00:49:37.166254 | orchestrator | 2026-03-07 00:49:37.166265 | orchestrator | PLAY [Apply role memcached] **************************************************** 2026-03-07 00:49:37.166275 | orchestrator | 2026-03-07 00:49:37.166286 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2026-03-07 00:49:37.166297 | orchestrator | Saturday 07 March 2026 00:49:03 +0000 (0:00:00.563) 0:00:01.295 ******** 2026-03-07 00:49:37.166308 | orchestrator | included: 
/ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 00:49:37.166320 | orchestrator | 2026-03-07 00:49:37.166330 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2026-03-07 00:49:37.166341 | orchestrator | Saturday 07 March 2026 00:49:03 +0000 (0:00:00.534) 0:00:01.830 ******** 2026-03-07 00:49:37.166352 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-03-07 00:49:37.166362 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-03-07 00:49:37.166373 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2026-03-07 00:49:37.166383 | orchestrator | 2026-03-07 00:49:37.166394 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2026-03-07 00:49:37.166405 | orchestrator | Saturday 07 March 2026 00:49:04 +0000 (0:00:00.875) 0:00:02.706 ******** 2026-03-07 00:49:37.166415 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-03-07 00:49:37.166426 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2026-03-07 00:49:37.166437 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-03-07 00:49:37.166447 | orchestrator | 2026-03-07 00:49:37.166458 | orchestrator | TASK [memcached : Check memcached container] *********************************** 2026-03-07 00:49:37.166468 | orchestrator | Saturday 07 March 2026 00:49:08 +0000 (0:00:03.693) 0:00:06.399 ******** 2026-03-07 00:49:37.166479 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:49:37.166489 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:49:37.166500 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:49:37.166511 | orchestrator | 2026-03-07 00:49:37.166521 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2026-03-07 00:49:37.166532 | orchestrator | Saturday 07 March 2026 00:49:11 +0000 (0:00:02.863) 0:00:09.263 ******** 2026-03-07 
00:49:37.166543 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:49:37.166554 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:49:37.166565 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:49:37.166575 | orchestrator | 2026-03-07 00:49:37.166586 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-07 00:49:37.166597 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-07 00:49:37.166609 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-07 00:49:37.166620 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-07 00:49:37.166631 | orchestrator | 2026-03-07 00:49:37.166641 | orchestrator | 2026-03-07 00:49:37.166652 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-07 00:49:37.166663 | orchestrator | Saturday 07 March 2026 00:49:20 +0000 (0:00:09.430) 0:00:18.694 ******** 2026-03-07 00:49:37.166673 | orchestrator | =============================================================================== 2026-03-07 00:49:37.166683 | orchestrator | memcached : Restart memcached container --------------------------------- 9.43s 2026-03-07 00:49:37.166694 | orchestrator | memcached : Copying over config.json files for services ----------------- 3.69s 2026-03-07 00:49:37.166704 | orchestrator | memcached : Check memcached container ----------------------------------- 2.86s 2026-03-07 00:49:37.166715 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.88s 2026-03-07 00:49:37.166733 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.56s 2026-03-07 00:49:37.166744 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.53s 2026-03-07 00:49:37.166754 | 
orchestrator | Group hosts based on Kolla action --------------------------------------- 0.48s 2026-03-07 00:49:37.166765 | orchestrator | 2026-03-07 00:49:37.166776 | orchestrator | 2026-03-07 00:49:37.166787 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-07 00:49:37.166797 | orchestrator | 2026-03-07 00:49:37.166822 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-07 00:49:37.166833 | orchestrator | Saturday 07 March 2026 00:49:02 +0000 (0:00:00.304) 0:00:00.305 ******** 2026-03-07 00:49:37.166844 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:49:37.166854 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:49:37.166865 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:49:37.166876 | orchestrator | 2026-03-07 00:49:37.166886 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-07 00:49:37.166920 | orchestrator | Saturday 07 March 2026 00:49:02 +0000 (0:00:00.516) 0:00:00.821 ******** 2026-03-07 00:49:37.166931 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2026-03-07 00:49:37.166942 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2026-03-07 00:49:37.166953 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2026-03-07 00:49:37.166964 | orchestrator | 2026-03-07 00:49:37.166975 | orchestrator | PLAY [Apply role redis] ******************************************************** 2026-03-07 00:49:37.166986 | orchestrator | 2026-03-07 00:49:37.166996 | orchestrator | TASK [redis : include_tasks] *************************************************** 2026-03-07 00:49:37.167007 | orchestrator | Saturday 07 March 2026 00:49:03 +0000 (0:00:00.613) 0:00:01.435 ******** 2026-03-07 00:49:37.167018 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 00:49:37.167028 | orchestrator | 
2026-03-07 00:49:37.167100 | orchestrator | TASK [redis : Ensuring config directories exist] *******************************
2026-03-07 00:49:37.167122 | orchestrator | Saturday 07 March 2026 00:49:03 +0000 (0:00:00.419) 0:00:01.854 ********
2026-03-07 00:49:37.167202 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-07 00:49:37.167226 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-07 00:49:37.167238 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-07 00:49:37.167251 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-07 00:49:37.167273 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-07 00:49:37.167303 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-07 00:49:37.167316 | orchestrator |
2026-03-07 00:49:37.167327 | orchestrator | TASK [redis : Copying over default config.json files] **************************
2026-03-07 00:49:37.167338 | orchestrator | Saturday 07 March 2026 00:49:05 +0000 (0:00:01.532) 0:00:03.386 ********
2026-03-07 00:49:37.167350 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-07 00:49:37.167362 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-07 00:49:37.167373 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-07 00:49:37.167384 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-07 00:49:37.167402 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-07 00:49:37.167425 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-07 00:49:37.167437 | orchestrator |
2026-03-07 00:49:37.167448 | orchestrator | TASK [redis : Copying over redis config files] *********************************
2026-03-07 00:49:37.167459 | orchestrator | Saturday 07 March 2026 00:49:09 +0000 (0:00:04.448) 0:00:07.835 ********
2026-03-07 00:49:37.167471 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-07 00:49:37.167519 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-07 00:49:37.167533 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-07 00:49:37.167544 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-07 00:49:37.167563 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-07 00:49:37.167580 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-07 00:49:37.167591 | orchestrator |
2026-03-07 00:49:37.167658 | orchestrator | TASK [redis : Check redis containers] ******************************************
2026-03-07 00:49:37.167669 | orchestrator | Saturday 07 March 2026 00:49:13 +0000 (0:00:04.032) 0:00:11.867 ********
2026-03-07 00:49:37.167681 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-07 00:49:37.167692 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-07 00:49:37.167703 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-07 00:49:37.167724 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-07 00:49:37.167735 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-07 00:49:37.167752 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-07 00:49:37.167763 | orchestrator |
2026-03-07 00:49:37.167774 | orchestrator | TASK [redis : Flush handlers] **************************************************
2026-03-07 00:49:37.167785 | orchestrator | Saturday 07 March 2026 00:49:16 +0000 (0:00:02.459) 0:00:14.327 ********
2026-03-07 00:49:37.167796 | orchestrator |
2026-03-07 00:49:37.167807 | orchestrator | TASK [redis : Flush handlers] **************************************************
2026-03-07 00:49:37.167824 | orchestrator | Saturday 07 March 2026 00:49:16 +0000 (0:00:00.076) 0:00:14.404 ********
2026-03-07 00:49:37.167836 | orchestrator |
2026-03-07 00:49:37.167846 | orchestrator | TASK [redis : Flush handlers] **************************************************
2026-03-07 00:49:37.167861 | orchestrator | Saturday 07 March 2026 00:49:16 +0000 (0:00:00.090) 0:00:14.494 ********
2026-03-07 00:49:37.167873 | orchestrator |
2026-03-07 00:49:37.167884 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ******************************
2026-03-07 00:49:37.167895 | orchestrator | Saturday 07 March 2026 00:49:16 +0000 (0:00:00.095) 0:00:14.590 ********
2026-03-07 00:49:37.167906 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:49:37.167917 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:49:37.167928 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:49:37.167938 | orchestrator |
2026-03-07 00:49:37.167949 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] *********************
2026-03-07 00:49:37.167960 | orchestrator | Saturday 07 March 2026 00:49:26 +0000 (0:00:10.038) 0:00:24.628 ********
2026-03-07 00:49:37.167971 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:49:37.167982 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:49:37.167992 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:49:37.168003 | orchestrator |
2026-03-07 00:49:37.168014 | orchestrator | PLAY RECAP *********************************************************************
2026-03-07 00:49:37.168025 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-07 00:49:37.168072 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-07 00:49:37.168085 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-07 00:49:37.168096 | orchestrator |
2026-03-07 00:49:37.168107 | orchestrator |
2026-03-07 00:49:37.168118 | orchestrator | TASKS RECAP ********************************************************************
2026-03-07 00:49:37.168129 | orchestrator | Saturday 07 March 2026 00:49:32 +0000 (0:00:05.832) 0:00:30.461 ********
2026-03-07 00:49:37.168140 | orchestrator | ===============================================================================
2026-03-07 00:49:37.168150 | orchestrator | redis : Restart redis container ---------------------------------------- 10.04s
2026-03-07 00:49:37.168161 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 5.83s
2026-03-07 00:49:37.168172 | orchestrator | redis : Copying over default config.json files -------------------------- 4.44s
2026-03-07 00:49:37.168183 | orchestrator | redis : Copying over redis config files --------------------------------- 4.03s
2026-03-07 00:49:37.168194 | orchestrator | redis : Check redis containers ------------------------------------------ 2.46s
2026-03-07 00:49:37.168205 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.54s
2026-03-07 00:49:37.168216 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.61s
2026-03-07 00:49:37.168226 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.52s
2026-03-07 00:49:37.168238 | orchestrator | redis : include_tasks --------------------------------------------------- 0.42s
2026-03-07 00:49:37.168249 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.26s
2026-03-07 00:49:37.168260 | orchestrator | 2026-03-07 00:49:37 | INFO  | Task da3f6668-bb55-453e-9882-98f52b172764 is in state SUCCESS
2026-03-07 00:49:37.168272 | orchestrator | 2026-03-07 00:49:37 | INFO  | Task bed1bdb5-c57e-486d-9209-a3be88068ef7 is in state STARTED
2026-03-07 00:49:37.168390 | orchestrator | 2026-03-07 00:49:37 | INFO  | Task 8563f1e5-128f-4928-b840-a6da32a959b2 is in state STARTED
2026-03-07 00:49:37.168582 | orchestrator | 2026-03-07 00:49:37 | INFO  | Task 5b234dec-d220-433a-944e-90111a842742 is in state STARTED
2026-03-07 00:49:37.169482 | orchestrator | 2026-03-07 00:49:37 | INFO  | Task 58f3c3b8-7548-464c-bafb-42fd537ca7de is in state STARTED
2026-03-07 00:49:37.171827 | orchestrator | 2026-03-07 00:49:37 | INFO  | Task 171b352d-5066-42cc-9db4-5c9db626783c is in state STARTED
2026-03-07 00:49:37.171873 | orchestrator | 2026-03-07 00:49:37 | INFO  | Wait 1 second(s) until the next check
2026-03-07 00:49:40.251547 | orchestrator | 2026-03-07 00:49:40 | INFO  | Task
bed1bdb5-c57e-486d-9209-a3be88068ef7 is in state STARTED
2026-03-07 00:49:40.251667 | orchestrator | 2026-03-07 00:49:40 | INFO  | Task 8563f1e5-128f-4928-b840-a6da32a959b2 is in state STARTED
2026-03-07 00:49:40.252681 | orchestrator | 2026-03-07 00:49:40 | INFO  | Task 5b234dec-d220-433a-944e-90111a842742 is in state STARTED
2026-03-07 00:49:40.255393 | orchestrator | 2026-03-07 00:49:40 | INFO  | Task 58f3c3b8-7548-464c-bafb-42fd537ca7de is in state STARTED
2026-03-07 00:49:40.256310 | orchestrator | 2026-03-07 00:49:40 | INFO  | Task 171b352d-5066-42cc-9db4-5c9db626783c is in state STARTED
2026-03-07 00:49:40.256418 | orchestrator | 2026-03-07 00:49:40 | INFO  | Wait 1 second(s) until the next check
2026-03-07 00:50:26.190890 | orchestrator |
2026-03-07 00:50:26.191114 | orchestrator | 2026-03-07 00:50:26 |
INFO  | Task bed1bdb5-c57e-486d-9209-a3be88068ef7 is in state SUCCESS
2026-03-07 00:50:26.191993 | orchestrator |
2026-03-07 00:50:26.192048 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-07 00:50:26.192070 | orchestrator |
2026-03-07 00:50:26.192090 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-07 00:50:26.192139 | orchestrator | Saturday 07 March 2026 00:49:01 +0000 (0:00:00.251) 0:00:00.251 ********
2026-03-07 00:50:26.192152 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:50:26.192165 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:50:26.192204 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:50:26.192224 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:50:26.192237 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:50:26.192248 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:50:26.192259 | orchestrator |
2026-03-07 00:50:26.192270 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-07 00:50:26.192281 | orchestrator | Saturday 07 March 2026 00:49:02 +0000 (0:00:00.774) 0:00:01.026 ********
2026-03-07 00:50:26.192292 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-03-07 00:50:26.192303 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-03-07 00:50:26.192314 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-03-07 00:50:26.192325 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-03-07 00:50:26.192336 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-03-07 00:50:26.192347 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-03-07 00:50:26.192357 | orchestrator |
2026-03-07 00:50:26.192368 | orchestrator | PLAY [Apply role openvswitch] **************************************************
2026-03-07 00:50:26.192379 | orchestrator |
2026-03-07 00:50:26.192390 | orchestrator | TASK [openvswitch : include_tasks] *********************************************
2026-03-07 00:50:26.192400 | orchestrator | Saturday 07 March 2026 00:49:03 +0000 (0:00:00.616) 0:00:01.642 ********
2026-03-07 00:50:26.192412 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-07 00:50:26.192425 | orchestrator |
2026-03-07 00:50:26.192435 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-03-07 00:50:26.192446 | orchestrator | Saturday 07 March 2026 00:49:04 +0000 (0:00:01.488) 0:00:03.131 ********
2026-03-07 00:50:26.192457 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2026-03-07 00:50:26.192468 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2026-03-07 00:50:26.192479 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2026-03-07 00:50:26.192490 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2026-03-07 00:50:26.192501 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2026-03-07 00:50:26.192511 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2026-03-07 00:50:26.192522 | orchestrator |
2026-03-07 00:50:26.192533 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-03-07 00:50:26.192544 | orchestrator | Saturday 07 March 2026 00:49:07 +0000 (0:00:02.998) 0:00:06.129 ********
2026-03-07 00:50:26.192555 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2026-03-07 00:50:26.192566 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2026-03-07 00:50:26.192594 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2026-03-07 00:50:26.192606 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2026-03-07 00:50:26.192618 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2026-03-07 00:50:26.192630 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2026-03-07 00:50:26.192643 | orchestrator |
2026-03-07 00:50:26.192655 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-03-07 00:50:26.192667 | orchestrator | Saturday 07 March 2026 00:49:09 +0000 (0:00:02.149) 0:00:08.279 ********
2026-03-07 00:50:26.192679 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)
2026-03-07 00:50:26.192693 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:50:26.192712 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)
2026-03-07 00:50:26.192744 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:50:26.192761 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)
2026-03-07 00:50:26.192773 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:50:26.192829 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)
2026-03-07 00:50:26.192842 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:50:26.192852 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)
2026-03-07 00:50:26.192863 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:50:26.192874 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)
2026-03-07 00:50:26.192885 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:50:26.192895 | orchestrator |
2026-03-07 00:50:26.192906 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] *****************
2026-03-07 00:50:26.192917 | orchestrator | Saturday 07 March 2026 00:49:12 +0000 (0:00:03.016) 0:00:11.295 ********
2026-03-07 00:50:26.192928 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:50:26.192938 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:50:26.192949 |
orchestrator | skipping: [testbed-node-2] 2026-03-07 00:50:26.192960 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:50:26.192970 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:50:26.192981 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:50:26.192992 | orchestrator | 2026-03-07 00:50:26.193004 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2026-03-07 00:50:26.193014 | orchestrator | Saturday 07 March 2026 00:49:14 +0000 (0:00:01.109) 0:00:12.404 ******** 2026-03-07 00:50:26.193050 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-07 00:50:26.193065 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client 
list-dbs'], 'timeout': '30'}}}) 2026-03-07 00:50:26.193078 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-07 00:50:26.193096 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-07 00:50:26.193118 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-07 00:50:26.193129 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-07 00:50:26.193148 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-07 00:50:26.193159 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 
'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-07 00:50:26.193171 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-07 00:50:26.193222 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-07 00:50:26.193235 | orchestrator | changed: [testbed-node-5] => 
(item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-07 00:50:26.193413 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-07 00:50:26.193444 | orchestrator | 2026-03-07 00:50:26.193463 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2026-03-07 00:50:26.193483 | orchestrator | Saturday 07 March 2026 00:49:16 +0000 (0:00:02.731) 0:00:15.135 ******** 2026-03-07 00:50:26.193503 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': 
['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-07 00:50:26.193521 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-07 00:50:26.193540 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-07 00:50:26.193571 | orchestrator | changed: 
[testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-07 00:50:26.193582 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-07 00:50:26.193616 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-07 00:50:26.193628 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-07 00:50:26.193639 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-07 00:50:26.193662 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': 
['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-07 00:50:26.193673 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-07 00:50:26.193684 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-07 00:50:26.193704 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 
'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-07 00:50:26.193715 | orchestrator | 2026-03-07 00:50:26.193726 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2026-03-07 00:50:26.193737 | orchestrator | Saturday 07 March 2026 00:49:20 +0000 (0:00:04.180) 0:00:19.316 ******** 2026-03-07 00:50:26.193748 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:50:26.193759 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:50:26.193769 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:50:26.193780 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:50:26.193790 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:50:26.193801 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:50:26.193811 | orchestrator | 2026-03-07 00:50:26.193822 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2026-03-07 00:50:26.193833 | orchestrator | Saturday 07 March 2026 00:49:22 +0000 (0:00:01.779) 0:00:21.095 ******** 2026-03-07 00:50:26.193844 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-07 00:50:26.193867 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-07 00:50:26.193888 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-07 00:50:26.193908 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 
'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-07 00:50:26.193938 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-07 00:50:26.193958 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl 
version'], 'timeout': '30'}}}) 2026-03-07 00:50:26.193977 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-07 00:50:26.193999 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-07 00:50:26.194011 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-07 00:50:26.194092 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-07 00:50:26.194114 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-07 00:50:26.194126 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': 
True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-07 00:50:26.194144 | orchestrator | 2026-03-07 00:50:26.194156 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-07 00:50:26.194167 | orchestrator | Saturday 07 March 2026 00:49:26 +0000 (0:00:03.710) 0:00:24.806 ******** 2026-03-07 00:50:26.194250 | orchestrator | 2026-03-07 00:50:26.194264 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-07 00:50:26.194276 | orchestrator | Saturday 07 March 2026 00:49:26 +0000 (0:00:00.357) 0:00:25.164 ******** 2026-03-07 00:50:26.194287 | orchestrator | 2026-03-07 00:50:26.194298 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-07 00:50:26.194309 | orchestrator | Saturday 07 March 2026 00:49:27 +0000 (0:00:00.483) 0:00:25.647 ******** 2026-03-07 00:50:26.194319 | orchestrator | 2026-03-07 00:50:26.194330 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-07 00:50:26.194341 | orchestrator | Saturday 07 March 2026 00:49:27 +0000 (0:00:00.185) 0:00:25.833 ******** 2026-03-07 00:50:26.194351 | orchestrator | 2026-03-07 00:50:26.194362 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-07 00:50:26.194373 | orchestrator | Saturday 07 March 2026 00:49:27 +0000 (0:00:00.212) 0:00:26.046 ******** 2026-03-07 00:50:26.194383 | orchestrator | 2026-03-07 00:50:26.194393 | orchestrator | TASK [openvswitch : Flush Handlers] 
******************************************** 2026-03-07 00:50:26.194402 | orchestrator | Saturday 07 March 2026 00:49:27 +0000 (0:00:00.226) 0:00:26.272 ******** 2026-03-07 00:50:26.194412 | orchestrator | 2026-03-07 00:50:26.194422 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2026-03-07 00:50:26.194431 | orchestrator | Saturday 07 March 2026 00:49:28 +0000 (0:00:00.200) 0:00:26.473 ******** 2026-03-07 00:50:26.194441 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:50:26.194451 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:50:26.194460 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:50:26.194470 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:50:26.194479 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:50:26.194489 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:50:26.194498 | orchestrator | 2026-03-07 00:50:26.194509 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2026-03-07 00:50:26.194519 | orchestrator | Saturday 07 March 2026 00:49:42 +0000 (0:00:14.153) 0:00:40.626 ******** 2026-03-07 00:50:26.194528 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:50:26.194539 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:50:26.194548 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:50:26.194558 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:50:26.194567 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:50:26.194577 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:50:26.194587 | orchestrator | 2026-03-07 00:50:26.194596 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-03-07 00:50:26.194606 | orchestrator | Saturday 07 March 2026 00:49:44 +0000 (0:00:02.263) 0:00:42.890 ******** 2026-03-07 00:50:26.194615 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:50:26.194625 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:50:26.194637 
| orchestrator | changed: [testbed-node-1] 2026-03-07 00:50:26.194654 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:50:26.194668 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:50:26.194686 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:50:26.194700 | orchestrator | 2026-03-07 00:50:26.194715 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2026-03-07 00:50:26.194730 | orchestrator | Saturday 07 March 2026 00:49:54 +0000 (0:00:10.370) 0:00:53.261 ******** 2026-03-07 00:50:26.194746 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2026-03-07 00:50:26.194777 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2026-03-07 00:50:26.194791 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2026-03-07 00:50:26.194807 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2026-03-07 00:50:26.194823 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2026-03-07 00:50:26.194849 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2026-03-07 00:50:26.194867 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2026-03-07 00:50:26.194885 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2026-03-07 00:50:26.194901 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2026-03-07 00:50:26.194918 | orchestrator | changed: [testbed-node-3] => (item={'col': 
'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2026-03-07 00:50:26.194928 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2026-03-07 00:50:26.194938 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2026-03-07 00:50:26.194948 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-07 00:50:26.194957 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-07 00:50:26.194967 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-07 00:50:26.194976 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-07 00:50:26.194986 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-07 00:50:26.194995 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-07 00:50:26.195005 | orchestrator | 2026-03-07 00:50:26.195015 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2026-03-07 00:50:26.195025 | orchestrator | Saturday 07 March 2026 00:50:03 +0000 (0:00:08.862) 0:01:02.123 ******** 2026-03-07 00:50:26.195034 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2026-03-07 00:50:26.195044 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:50:26.195054 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2026-03-07 00:50:26.195063 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2026-03-07 00:50:26.195073 | orchestrator | skipping: [testbed-node-4] 2026-03-07 
00:50:26.195083 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2026-03-07 00:50:26.195092 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:50:26.195102 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2026-03-07 00:50:26.195111 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2026-03-07 00:50:26.195121 | orchestrator | 2026-03-07 00:50:26.195130 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2026-03-07 00:50:26.195150 | orchestrator | Saturday 07 March 2026 00:50:07 +0000 (0:00:03.998) 0:01:06.121 ******** 2026-03-07 00:50:26.195164 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2026-03-07 00:50:26.195228 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:50:26.195243 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2026-03-07 00:50:26.195261 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:50:26.195271 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2026-03-07 00:50:26.195280 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:50:26.195290 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2026-03-07 00:50:26.195299 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2026-03-07 00:50:26.195309 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2026-03-07 00:50:26.195318 | orchestrator | 2026-03-07 00:50:26.195328 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-03-07 00:50:26.195338 | orchestrator | Saturday 07 March 2026 00:50:12 +0000 (0:00:04.708) 0:01:10.830 ******** 2026-03-07 00:50:26.195347 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:50:26.195356 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:50:26.195366 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:50:26.195375 | orchestrator | changed: [testbed-node-2] 
2026-03-07 00:50:26.195385 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:50:26.195394 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:50:26.195404 | orchestrator | 2026-03-07 00:50:26.195414 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-07 00:50:26.195423 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-07 00:50:26.195432 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-07 00:50:26.195440 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-07 00:50:26.195448 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-07 00:50:26.195456 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-07 00:50:26.195470 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-07 00:50:26.195478 | orchestrator | 2026-03-07 00:50:26.195486 | orchestrator | 2026-03-07 00:50:26.195494 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-07 00:50:26.195502 | orchestrator | Saturday 07 March 2026 00:50:23 +0000 (0:00:10.658) 0:01:21.489 ******** 2026-03-07 00:50:26.195510 | orchestrator | =============================================================================== 2026-03-07 00:50:26.195518 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 21.03s 2026-03-07 00:50:26.195525 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 14.15s 2026-03-07 00:50:26.195534 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 8.86s 2026-03-07 00:50:26.195542 | orchestrator | openvswitch : 
Ensuring OVS ports are properly setup --------------------- 4.71s 2026-03-07 00:50:26.195550 | orchestrator | openvswitch : Copying over config.json files for services --------------- 4.18s 2026-03-07 00:50:26.195558 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 4.00s 2026-03-07 00:50:26.195565 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 3.71s 2026-03-07 00:50:26.195573 | orchestrator | module-load : Drop module persistence ----------------------------------- 3.02s 2026-03-07 00:50:26.195581 | orchestrator | module-load : Load modules ---------------------------------------------- 3.00s 2026-03-07 00:50:26.195589 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 2.73s 2026-03-07 00:50:26.195597 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 2.26s 2026-03-07 00:50:26.195604 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 2.15s 2026-03-07 00:50:26.195617 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.78s 2026-03-07 00:50:26.195625 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.67s 2026-03-07 00:50:26.195633 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.49s 2026-03-07 00:50:26.195641 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 1.11s 2026-03-07 00:50:26.195648 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.78s 2026-03-07 00:50:26.195656 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.62s 2026-03-07 00:50:26.195664 | orchestrator | 2026-03-07 00:50:26 | INFO  | Task 8563f1e5-128f-4928-b840-a6da32a959b2 is in state STARTED 2026-03-07 00:50:26.195672 | orchestrator | 2026-03-07 
00:50:26 | INFO  | Task 694d3cb5-40e2-40ee-9873-08f7a683a628 is in state STARTED 2026-03-07 00:50:26.199684 | orchestrator | 2026-03-07 00:50:26 | INFO  | Task 5b234dec-d220-433a-944e-90111a842742 is in state STARTED 2026-03-07 00:50:26.204760 | orchestrator | 2026-03-07 00:50:26 | INFO  | Task 58f3c3b8-7548-464c-bafb-42fd537ca7de is in state STARTED 2026-03-07 00:50:26.204873 | orchestrator | 2026-03-07 00:50:26 | INFO  | Task 171b352d-5066-42cc-9db4-5c9db626783c is in state STARTED 2026-03-07 00:50:26.204894 | orchestrator | 2026-03-07 00:50:26 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:50:29.262270 | orchestrator | 2026-03-07 00:50:29 | INFO  | Task 8563f1e5-128f-4928-b840-a6da32a959b2 is in state STARTED 2026-03-07 00:50:29.263777 | orchestrator | 2026-03-07 00:50:29 | INFO  | Task 694d3cb5-40e2-40ee-9873-08f7a683a628 is in state STARTED 2026-03-07 00:50:29.266403 | orchestrator | 2026-03-07 00:50:29 | INFO  | Task 5b234dec-d220-433a-944e-90111a842742 is in state STARTED 2026-03-07 00:50:29.267992 | orchestrator | 2026-03-07 00:50:29 | INFO  | Task 58f3c3b8-7548-464c-bafb-42fd537ca7de is in state STARTED 2026-03-07 00:50:29.269888 | orchestrator | 2026-03-07 00:50:29 | INFO  | Task 171b352d-5066-42cc-9db4-5c9db626783c is in state STARTED 2026-03-07 00:50:29.269996 | orchestrator | 2026-03-07 00:50:29 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:50:32.306128 | orchestrator | 2026-03-07 00:50:32 | INFO  | Task 8563f1e5-128f-4928-b840-a6da32a959b2 is in state STARTED 2026-03-07 00:50:32.310386 | orchestrator | 2026-03-07 00:50:32 | INFO  | Task 694d3cb5-40e2-40ee-9873-08f7a683a628 is in state STARTED 2026-03-07 00:50:32.314274 | orchestrator | 2026-03-07 00:50:32 | INFO  | Task 5b234dec-d220-433a-944e-90111a842742 is in state STARTED 2026-03-07 00:50:32.317975 | orchestrator | 2026-03-07 00:50:32 | INFO  | Task 58f3c3b8-7548-464c-bafb-42fd537ca7de is in state STARTED 2026-03-07 00:50:32.320468 | orchestrator | 2026-03-07 
00:50:32 | INFO  | Task 171b352d-5066-42cc-9db4-5c9db626783c is in state STARTED 2026-03-07 00:50:32.321397 | orchestrator | 2026-03-07 00:50:32 | INFO  | Wait 1 second(s) until the next check [identical polling cycles from 00:50:35 to 00:51:25, all five tasks in state STARTED, trimmed] 2026-03-07 00:51:29.036612 | orchestrator | 2026-03-07 00:51:29 | INFO  | Task 8563f1e5-128f-4928-b840-a6da32a959b2 is in state STARTED 2026-03-07 00:51:29.037527 | orchestrator | 2026-03-07 00:51:29 | INFO  | Task
694d3cb5-40e2-40ee-9873-08f7a683a628 is in state STARTED 2026-03-07 00:51:29.048201 | orchestrator | 2026-03-07 00:51:29 | INFO  | Task 5b234dec-d220-433a-944e-90111a842742 is in state STARTED 2026-03-07 00:51:29.049670 | orchestrator | 2026-03-07 00:51:29 | INFO  | Task 58f3c3b8-7548-464c-bafb-42fd537ca7de is in state STARTED 2026-03-07 00:51:29.052242 | orchestrator | 2026-03-07 00:51:29 | INFO  | Task 171b352d-5066-42cc-9db4-5c9db626783c is in state STARTED 2026-03-07 00:51:29.052294 | orchestrator | 2026-03-07 00:51:29 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:51:32.109228 | orchestrator | 2026-03-07 00:51:32 | INFO  | Task 8563f1e5-128f-4928-b840-a6da32a959b2 is in state STARTED 2026-03-07 00:51:32.109599 | orchestrator | 2026-03-07 00:51:32 | INFO  | Task 694d3cb5-40e2-40ee-9873-08f7a683a628 is in state STARTED 2026-03-07 00:51:32.112257 | orchestrator | 2026-03-07 00:51:32 | INFO  | Task 5b234dec-d220-433a-944e-90111a842742 is in state SUCCESS 2026-03-07 00:51:32.114426 | orchestrator | 2026-03-07 00:51:32.114481 | orchestrator | 2026-03-07 00:51:32.114490 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2026-03-07 00:51:32.114498 | orchestrator | 2026-03-07 00:51:32.114505 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2026-03-07 00:51:32.114512 | orchestrator | Saturday 07 March 2026 00:46:15 +0000 (0:00:00.211) 0:00:00.211 ******** 2026-03-07 00:51:32.114519 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:51:32.114527 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:51:32.114534 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:51:32.114540 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:51:32.114546 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:51:32.114555 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:51:32.114566 | orchestrator | 2026-03-07 00:51:32.114576 | orchestrator | TASK [k3s_prereq : Set same 
timezone on every Server] ************************** 2026-03-07 00:51:32.114586 | orchestrator | Saturday 07 March 2026 00:46:16 +0000 (0:00:00.873) 0:00:01.085 ******** 2026-03-07 00:51:32.114596 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:51:32.114627 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:51:32.114638 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:51:32.114647 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:51:32.114657 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:51:32.114666 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:51:32.114676 | orchestrator | 2026-03-07 00:51:32.114686 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2026-03-07 00:51:32.114696 | orchestrator | Saturday 07 March 2026 00:46:17 +0000 (0:00:00.786) 0:00:01.871 ******** 2026-03-07 00:51:32.114708 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:51:32.114718 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:51:32.114728 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:51:32.114739 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:51:32.114749 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:51:32.114759 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:51:32.114769 | orchestrator | 2026-03-07 00:51:32.114780 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2026-03-07 00:51:32.114790 | orchestrator | Saturday 07 March 2026 00:46:17 +0000 (0:00:00.913) 0:00:02.785 ******** 2026-03-07 00:51:32.114799 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:51:32.114806 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:51:32.114812 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:51:32.114818 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:51:32.114824 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:51:32.114830 | orchestrator | changed: 
[testbed-node-1] 2026-03-07 00:51:32.114836 | orchestrator | 2026-03-07 00:51:32.114843 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2026-03-07 00:51:32.114849 | orchestrator | Saturday 07 March 2026 00:46:20 +0000 (0:00:02.605) 0:00:05.390 ******** 2026-03-07 00:51:32.114859 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:51:32.114869 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:51:32.114880 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:51:32.114890 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:51:32.114909 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:51:32.114919 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:51:32.114928 | orchestrator | 2026-03-07 00:51:32.114937 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] ************************** 2026-03-07 00:51:32.114946 | orchestrator | Saturday 07 March 2026 00:46:22 +0000 (0:00:01.664) 0:00:07.055 ******** 2026-03-07 00:51:32.114955 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:51:32.114964 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:51:32.114975 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:51:32.114981 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:51:32.114987 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:51:32.114994 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:51:32.115000 | orchestrator | 2026-03-07 00:51:32.115006 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2026-03-07 00:51:32.115013 | orchestrator | Saturday 07 March 2026 00:46:23 +0000 (0:00:01.700) 0:00:08.755 ******** 2026-03-07 00:51:32.115020 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:51:32.115025 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:51:32.115030 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:51:32.115036 | orchestrator | skipping: [testbed-node-0] 
2026-03-07 00:51:32.115044 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:51:32.115053 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:51:32.115064 | orchestrator | 2026-03-07 00:51:32.115077 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2026-03-07 00:51:32.115085 | orchestrator | Saturday 07 March 2026 00:46:24 +0000 (0:00:01.059) 0:00:09.815 ******** 2026-03-07 00:51:32.115094 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:51:32.115102 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:51:32.115111 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:51:32.115119 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:51:32.115135 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:51:32.115143 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:51:32.115151 | orchestrator | 2026-03-07 00:51:32.115160 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2026-03-07 00:51:32.115169 | orchestrator | Saturday 07 March 2026 00:46:26 +0000 (0:00:01.095) 0:00:10.910 ******** 2026-03-07 00:51:32.115177 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-07 00:51:32.115185 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-07 00:51:32.115194 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:51:32.115203 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-07 00:51:32.115211 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-07 00:51:32.115220 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:51:32.115229 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-07 00:51:32.115238 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-07 
00:51:32.115246 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:51:32.115256 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-07 00:51:32.115274 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-07 00:51:32.115280 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:51:32.115285 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-07 00:51:32.115291 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-07 00:51:32.115296 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:51:32.115305 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-07 00:51:32.115314 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-07 00:51:32.115322 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:51:32.115352 | orchestrator | 2026-03-07 00:51:32.115362 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] ********************* 2026-03-07 00:51:32.115370 | orchestrator | Saturday 07 March 2026 00:46:27 +0000 (0:00:01.124) 0:00:12.035 ******** 2026-03-07 00:51:32.115379 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:51:32.115388 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:51:32.115398 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:51:32.115407 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:51:32.115416 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:51:32.115425 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:51:32.115433 | orchestrator | 2026-03-07 00:51:32.115443 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2026-03-07 00:51:32.115450 | orchestrator | Saturday 07 March 2026 00:46:28 +0000 (0:00:01.650) 0:00:13.685 
******** 2026-03-07 00:51:32.115456 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:51:32.115461 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:51:32.115467 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:51:32.115472 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:51:32.115477 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:51:32.115483 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:51:32.115488 | orchestrator | 2026-03-07 00:51:32.115493 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2026-03-07 00:51:32.115499 | orchestrator | Saturday 07 March 2026 00:46:30 +0000 (0:00:01.177) 0:00:14.862 ******** 2026-03-07 00:51:32.115504 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:51:32.115509 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:51:32.115515 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:51:32.115520 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:51:32.115525 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:51:32.115531 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:51:32.115542 | orchestrator | 2026-03-07 00:51:32.115548 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2026-03-07 00:51:32.115553 | orchestrator | Saturday 07 March 2026 00:46:35 +0000 (0:00:05.817) 0:00:20.680 ******** 2026-03-07 00:51:32.115558 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:51:32.115563 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:51:32.115569 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:51:32.115579 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:51:32.115585 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:51:32.115592 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:51:32.115601 | orchestrator | 2026-03-07 00:51:32.115616 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2026-03-07 
00:51:32.115625 | orchestrator | Saturday 07 March 2026 00:46:38 +0000 (0:00:02.990) 0:00:23.670 ******** 2026-03-07 00:51:32.115633 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:51:32.115642 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:51:32.115650 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:51:32.115658 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:51:32.115667 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:51:32.115675 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:51:32.115684 | orchestrator | 2026-03-07 00:51:32.115692 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2026-03-07 00:51:32.115703 | orchestrator | Saturday 07 March 2026 00:46:42 +0000 (0:00:03.354) 0:00:27.025 ******** 2026-03-07 00:51:32.115712 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:51:32.115721 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:51:32.115729 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:51:32.115737 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:51:32.115746 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:51:32.115754 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:51:32.115763 | orchestrator | 2026-03-07 00:51:32.115773 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] *************** 2026-03-07 00:51:32.115782 | orchestrator | Saturday 07 March 2026 00:46:43 +0000 (0:00:01.455) 0:00:28.480 ******** 2026-03-07 00:51:32.115792 | orchestrator | skipping: [testbed-node-3] => (item=rancher)  2026-03-07 00:51:32.115800 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)  2026-03-07 00:51:32.115810 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:51:32.115816 | orchestrator | skipping: [testbed-node-4] => (item=rancher)  2026-03-07 00:51:32.115821 | orchestrator | skipping: [testbed-node-4] => 
(item=rancher/k3s)  2026-03-07 00:51:32.115827 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:51:32.115832 | orchestrator | skipping: [testbed-node-5] => (item=rancher)  2026-03-07 00:51:32.115837 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)  2026-03-07 00:51:32.115843 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:51:32.115848 | orchestrator | skipping: [testbed-node-0] => (item=rancher)  2026-03-07 00:51:32.115853 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)  2026-03-07 00:51:32.115859 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:51:32.115864 | orchestrator | skipping: [testbed-node-1] => (item=rancher)  2026-03-07 00:51:32.115869 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)  2026-03-07 00:51:32.115875 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:51:32.115880 | orchestrator | skipping: [testbed-node-2] => (item=rancher)  2026-03-07 00:51:32.115886 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)  2026-03-07 00:51:32.115891 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:51:32.115896 | orchestrator | 2026-03-07 00:51:32.115902 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] *** 2026-03-07 00:51:32.115915 | orchestrator | Saturday 07 March 2026 00:46:45 +0000 (0:00:01.859) 0:00:30.340 ******** 2026-03-07 00:51:32.115920 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:51:32.115932 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:51:32.115938 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:51:32.115943 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:51:32.115948 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:51:32.115953 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:51:32.115959 | orchestrator | 2026-03-07 00:51:32.115964 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no 
registries configured] *** 2026-03-07 00:51:32.115970 | orchestrator | Saturday 07 March 2026 00:46:46 +0000 (0:00:01.389) 0:00:31.730 ******** 2026-03-07 00:51:32.115975 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:51:32.115980 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:51:32.115986 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:51:32.115991 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:51:32.115996 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:51:32.116001 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:51:32.116007 | orchestrator | 2026-03-07 00:51:32.116012 | orchestrator | PLAY [Deploy k3s master nodes] ************************************************* 2026-03-07 00:51:32.116018 | orchestrator | 2026-03-07 00:51:32.116023 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] *** 2026-03-07 00:51:32.116029 | orchestrator | Saturday 07 March 2026 00:46:50 +0000 (0:00:03.335) 0:00:35.065 ******** 2026-03-07 00:51:32.116034 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:51:32.116040 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:51:32.116045 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:51:32.116050 | orchestrator | 2026-03-07 00:51:32.116056 | orchestrator | TASK [k3s_server : Stop k3s-init] ********************************************** 2026-03-07 00:51:32.116061 | orchestrator | Saturday 07 March 2026 00:46:54 +0000 (0:00:04.100) 0:00:39.165 ******** 2026-03-07 00:51:32.116066 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:51:32.116072 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:51:32.116077 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:51:32.116082 | orchestrator | 2026-03-07 00:51:32.116088 | orchestrator | TASK [k3s_server : Stop k3s] *************************************************** 2026-03-07 00:51:32.116093 | orchestrator | Saturday 07 March 2026 00:46:56 +0000 (0:00:01.931) 0:00:41.097 ******** 
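Annotation: all `k3s_custom_registries` tasks above were skipped, meaning no container registry mirror is configured for this run, so `/etc/rancher/k3s/registries.yaml` is neither created nor removed. For reference, the file format k3s expects there is documented upstream; the mirror endpoint below is a placeholder, not a value from this job:

```yaml
# /etc/rancher/k3s/registries.yaml -- k3s containerd registry mirror config.
# Endpoint is a placeholder example; this run configured no mirrors.
mirrors:
  docker.io:
    endpoint:
      - "https://registry.example.com:5000"
```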
2026-03-07 00:51:32.116098 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:51:32.116104 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:51:32.116109 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:51:32.116115 | orchestrator | 2026-03-07 00:51:32.116120 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] **************************** 2026-03-07 00:51:32.116125 | orchestrator | Saturday 07 March 2026 00:46:57 +0000 (0:00:00.988) 0:00:42.086 ******** 2026-03-07 00:51:32.116131 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:51:32.116136 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:51:32.116141 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:51:32.116146 | orchestrator | 2026-03-07 00:51:32.116152 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] ********************************* 2026-03-07 00:51:32.116162 | orchestrator | Saturday 07 March 2026 00:46:58 +0000 (0:00:00.834) 0:00:42.920 ******** 2026-03-07 00:51:32.116168 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:51:32.116173 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:51:32.116178 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:51:32.116184 | orchestrator | 2026-03-07 00:51:32.116189 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] ************************** 2026-03-07 00:51:32.116195 | orchestrator | Saturday 07 March 2026 00:46:58 +0000 (0:00:00.579) 0:00:43.500 ******** 2026-03-07 00:51:32.116200 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:51:32.116205 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:51:32.116211 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:51:32.116216 | orchestrator | 2026-03-07 00:51:32.116221 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] ************************** 2026-03-07 00:51:32.116226 | orchestrator | Saturday 07 March 2026 00:47:00 +0000 (0:00:01.652) 0:00:45.153 ******** 2026-03-07 00:51:32.116232 | orchestrator 
| changed: [testbed-node-1] 2026-03-07 00:51:32.116242 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:51:32.116247 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:51:32.116252 | orchestrator | 2026-03-07 00:51:32.116258 | orchestrator | TASK [k3s_server : Deploy vip manifest] **************************************** 2026-03-07 00:51:32.116263 | orchestrator | Saturday 07 March 2026 00:47:03 +0000 (0:00:03.133) 0:00:48.286 ******** 2026-03-07 00:51:32.116268 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 00:51:32.116274 | orchestrator | 2026-03-07 00:51:32.116279 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] ******************************* 2026-03-07 00:51:32.116285 | orchestrator | Saturday 07 March 2026 00:47:04 +0000 (0:00:01.079) 0:00:49.365 ******** 2026-03-07 00:51:32.116290 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:51:32.116295 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:51:32.116300 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:51:32.116306 | orchestrator | 2026-03-07 00:51:32.116311 | orchestrator | TASK [k3s_server : Create manifests directory on first master] ***************** 2026-03-07 00:51:32.116317 | orchestrator | Saturday 07 March 2026 00:47:08 +0000 (0:00:03.591) 0:00:52.957 ******** 2026-03-07 00:51:32.116322 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:51:32.116347 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:51:32.116355 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:51:32.116361 | orchestrator | 2026-03-07 00:51:32.116366 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] ***************** 2026-03-07 00:51:32.116372 | orchestrator | Saturday 07 March 2026 00:47:09 +0000 (0:00:01.062) 0:00:54.020 ******** 2026-03-07 00:51:32.116377 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:51:32.116382 | orchestrator | changed: 
[testbed-node-0] 2026-03-07 00:51:32.116388 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:51:32.116393 | orchestrator | 2026-03-07 00:51:32.116398 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] ************************** 2026-03-07 00:51:32.116404 | orchestrator | Saturday 07 March 2026 00:47:11 +0000 (0:00:01.864) 0:00:55.884 ******** 2026-03-07 00:51:32.116409 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:51:32.116414 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:51:32.116420 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:51:32.116425 | orchestrator | 2026-03-07 00:51:32.116432 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************ 2026-03-07 00:51:32.116446 | orchestrator | Saturday 07 March 2026 00:47:12 +0000 (0:00:01.891) 0:00:57.775 ******** 2026-03-07 00:51:32.116455 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:51:32.116464 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:51:32.116472 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:51:32.116481 | orchestrator | 2026-03-07 00:51:32.116489 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] *********************************** 2026-03-07 00:51:32.116497 | orchestrator | Saturday 07 March 2026 00:47:14 +0000 (0:00:01.423) 0:00:59.199 ******** 2026-03-07 00:51:32.116503 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:51:32.116508 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:51:32.116514 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:51:32.116519 | orchestrator | 2026-03-07 00:51:32.116525 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] ********* 2026-03-07 00:51:32.116530 | orchestrator | Saturday 07 March 2026 00:47:15 +0000 (0:00:00.916) 0:01:00.116 ******** 2026-03-07 00:51:32.116536 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:51:32.116541 | orchestrator | changed: 
[testbed-node-1] 2026-03-07 00:51:32.116546 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:51:32.116552 | orchestrator | 2026-03-07 00:51:32.116557 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] ********** 2026-03-07 00:51:32.116562 | orchestrator | Saturday 07 March 2026 00:47:17 +0000 (0:00:02.404) 0:01:02.520 ******** 2026-03-07 00:51:32.116568 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:51:32.116573 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:51:32.116578 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:51:32.116590 | orchestrator | 2026-03-07 00:51:32.116595 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] *** 2026-03-07 00:51:32.116601 | orchestrator | Saturday 07 March 2026 00:47:20 +0000 (0:00:02.647) 0:01:05.167 ******** 2026-03-07 00:51:32.116606 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:51:32.116612 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:51:32.116617 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:51:32.116622 | orchestrator | 2026-03-07 00:51:32.116628 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] *** 2026-03-07 00:51:32.116633 | orchestrator | Saturday 07 March 2026 00:47:21 +0000 (0:00:01.050) 0:01:06.218 ******** 2026-03-07 00:51:32.116639 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-03-07 00:51:32.116645 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-03-07 00:51:32.116651 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 
2026-03-07 00:51:32.116661 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-03-07 00:51:32.116666 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-03-07 00:51:32.116672 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-03-07 00:51:32.116677 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-03-07 00:51:32.116682 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-03-07 00:51:32.116688 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-03-07 00:51:32.116693 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-03-07 00:51:32.116698 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-03-07 00:51:32.116704 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 
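Annotation: the four rounds of `FAILED - RETRYING` above are expected behavior, not an error: the verification task polls until every master reports in, and here it succeeded with 16 of 20 retries to spare (about 44 seconds, per the following task timer). A sketch of such a retry loop, assuming a `kubectl get nodes` check and the inventory group names used in this log; the exact command and label selector in the real role are unknown (the preceding "Set node role label selector" task suggests the label varies by Kubernetes version):

```yaml
# Sketch: poll until all masters have joined; command and until-expression
# are assumptions, retries/delay chosen to mirror the 20 attempts in the log.
- name: Verify that all nodes actually joined
  ansible.builtin.command:
    cmd: k3s kubectl get nodes -l "node-role.kubernetes.io/master=true" -o name
  register: joined_nodes
  until: joined_nodes.rc == 0 and
         (joined_nodes.stdout_lines | length) == (groups['master'] | length)
  retries: 20
  delay: 10
  changed_when: false
```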
2026-03-07 00:51:32.116709 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:51:32.116715 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:51:32.116720 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:51:32.116725 | orchestrator | 2026-03-07 00:51:32.116731 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ****************************** 2026-03-07 00:51:32.116736 | orchestrator | Saturday 07 March 2026 00:48:05 +0000 (0:00:43.840) 0:01:50.058 ******** 2026-03-07 00:51:32.116742 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:51:32.116747 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:51:32.116752 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:51:32.116758 | orchestrator | 2026-03-07 00:51:32.116763 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] ********* 2026-03-07 00:51:32.116768 | orchestrator | Saturday 07 March 2026 00:48:05 +0000 (0:00:00.772) 0:01:50.830 ******** 2026-03-07 00:51:32.116774 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:51:32.116779 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:51:32.116785 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:51:32.116790 | orchestrator | 2026-03-07 00:51:32.116795 | orchestrator | TASK [k3s_server : Copy K3s service file] ************************************** 2026-03-07 00:51:32.116801 | orchestrator | Saturday 07 March 2026 00:48:07 +0000 (0:00:01.213) 0:01:52.044 ******** 2026-03-07 00:51:32.116814 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:51:32.116819 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:51:32.116824 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:51:32.116830 | orchestrator | 2026-03-07 00:51:32.116840 | orchestrator | TASK [k3s_server : Enable and check K3s service] ******************************* 2026-03-07 00:51:32.116845 | orchestrator | Saturday 07 March 2026 00:48:09 +0000 (0:00:01.956) 0:01:54.001 ******** 2026-03-07 00:51:32.116851 
| orchestrator | changed: [testbed-node-2] 2026-03-07 00:51:32.116856 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:51:32.116861 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:51:32.116867 | orchestrator | 2026-03-07 00:51:32.116872 | orchestrator | TASK [k3s_server : Wait for node-token] **************************************** 2026-03-07 00:51:32.116878 | orchestrator | Saturday 07 March 2026 00:48:34 +0000 (0:00:25.623) 0:02:19.625 ******** 2026-03-07 00:51:32.116883 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:51:32.116888 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:51:32.116894 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:51:32.116899 | orchestrator | 2026-03-07 00:51:32.116904 | orchestrator | TASK [k3s_server : Register node-token file access mode] *********************** 2026-03-07 00:51:32.116910 | orchestrator | Saturday 07 March 2026 00:48:35 +0000 (0:00:00.923) 0:02:20.549 ******** 2026-03-07 00:51:32.116915 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:51:32.116921 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:51:32.116926 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:51:32.116931 | orchestrator | 2026-03-07 00:51:32.116937 | orchestrator | TASK [k3s_server : Change file access node-token] ****************************** 2026-03-07 00:51:32.116942 | orchestrator | Saturday 07 March 2026 00:48:36 +0000 (0:00:00.785) 0:02:21.334 ******** 2026-03-07 00:51:32.116948 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:51:32.116953 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:51:32.116958 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:51:32.116964 | orchestrator | 2026-03-07 00:51:32.116969 | orchestrator | TASK [k3s_server : Read node-token from master] ******************************** 2026-03-07 00:51:32.116974 | orchestrator | Saturday 07 March 2026 00:48:37 +0000 (0:00:00.741) 0:02:22.076 ******** 2026-03-07 00:51:32.116980 | orchestrator | ok: [testbed-node-0] 
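Annotation: the node-token sequence above (register access mode, change it, read, then restore) exists because `/var/lib/rancher/k3s/server/node-token` is root-only by default; the role briefly relaxes the mode, slurps the token into a fact for the agents to join with, then restores the original permissions. A sketch of the read-and-store step, using k3s's default token path; the fact name is an assumption:

```yaml
# Sketch of "Read node-token from master" / "Store Master node-token".
# slurp returns base64, hence the b64decode; fact name is hypothetical.
- name: Read node-token from master
  ansible.builtin.slurp:
    src: /var/lib/rancher/k3s/server/node-token
  register: node_token_raw

- name: Store Master node-token
  ansible.builtin.set_fact:
    k3s_token: "{{ node_token_raw.content | b64decode | trim }}"
```

The later "Configure kubectl cluster to https://192.168.16.8:6443" task then points the copied kubeconfig at the shared VIP endpoint rather than at any single master.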
2026-03-07 00:51:32.116985 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:51:32.116991 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:51:32.116996 | orchestrator | 2026-03-07 00:51:32.117004 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************ 2026-03-07 00:51:32.117013 | orchestrator | Saturday 07 March 2026 00:48:38 +0000 (0:00:01.123) 0:02:23.200 ******** 2026-03-07 00:51:32.117022 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:51:32.117031 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:51:32.117039 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:51:32.117048 | orchestrator | 2026-03-07 00:51:32.117056 | orchestrator | TASK [k3s_server : Restore node-token file access] ***************************** 2026-03-07 00:51:32.117065 | orchestrator | Saturday 07 March 2026 00:48:38 +0000 (0:00:00.409) 0:02:23.609 ******** 2026-03-07 00:51:32.117074 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:51:32.117147 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:51:32.117158 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:51:32.117167 | orchestrator | 2026-03-07 00:51:32.117177 | orchestrator | TASK [k3s_server : Create directory .kube] ************************************* 2026-03-07 00:51:32.117192 | orchestrator | Saturday 07 March 2026 00:48:39 +0000 (0:00:00.768) 0:02:24.378 ******** 2026-03-07 00:51:32.117202 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:51:32.117211 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:51:32.117221 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:51:32.117230 | orchestrator | 2026-03-07 00:51:32.117240 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ******************** 2026-03-07 00:51:32.117249 | orchestrator | Saturday 07 March 2026 00:48:40 +0000 (0:00:00.804) 0:02:25.182 ******** 2026-03-07 00:51:32.117258 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:51:32.117267 | 
orchestrator | changed: [testbed-node-1] 2026-03-07 00:51:32.117285 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:51:32.117293 | orchestrator | 2026-03-07 00:51:32.117302 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] ***** 2026-03-07 00:51:32.117310 | orchestrator | Saturday 07 March 2026 00:48:42 +0000 (0:00:01.874) 0:02:27.057 ******** 2026-03-07 00:51:32.117319 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:51:32.117405 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:51:32.117417 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:51:32.117425 | orchestrator | 2026-03-07 00:51:32.117434 | orchestrator | TASK [k3s_server : Create kubectl symlink] ************************************* 2026-03-07 00:51:32.117442 | orchestrator | Saturday 07 March 2026 00:48:43 +0000 (0:00:00.990) 0:02:28.047 ******** 2026-03-07 00:51:32.117451 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:51:32.117460 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:51:32.117468 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:51:32.117477 | orchestrator | 2026-03-07 00:51:32.117486 | orchestrator | TASK [k3s_server : Create crictl symlink] ************************************** 2026-03-07 00:51:32.117494 | orchestrator | Saturday 07 March 2026 00:48:43 +0000 (0:00:00.433) 0:02:28.481 ******** 2026-03-07 00:51:32.117502 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:51:32.117511 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:51:32.117519 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:51:32.117528 | orchestrator | 2026-03-07 00:51:32.117536 | orchestrator | TASK [k3s_server : Get contents of manifests folder] *************************** 2026-03-07 00:51:32.117545 | orchestrator | Saturday 07 March 2026 00:48:43 +0000 (0:00:00.333) 0:02:28.814 ******** 2026-03-07 00:51:32.117553 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:51:32.117562 | orchestrator | 
ok: [testbed-node-1]
2026-03-07 00:51:32.117571 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:51:32.117580 | orchestrator |
2026-03-07 00:51:32.117588 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] ***************************
2026-03-07 00:51:32.117597 | orchestrator | Saturday 07 March 2026 00:48:44 +0000 (0:00:00.888) 0:02:29.703 ********
2026-03-07 00:51:32.117606 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:51:32.117614 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:51:32.117623 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:51:32.117631 | orchestrator |
2026-03-07 00:51:32.117640 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] ***
2026-03-07 00:51:32.117649 | orchestrator | Saturday 07 March 2026 00:48:45 +0000 (0:00:00.676) 0:02:30.380 ********
2026-03-07 00:51:32.117658 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-03-07 00:51:32.117680 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-03-07 00:51:32.117690 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-03-07 00:51:32.117699 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-03-07 00:51:32.117708 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-03-07 00:51:32.117717 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-03-07 00:51:32.117726 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-03-07 00:51:32.117735 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-03-07 00:51:32.117744 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-03-07 00:51:32.117753 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml)
2026-03-07 00:51:32.117762 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-03-07 00:51:32.117770 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-03-07 00:51:32.117788 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml)
2026-03-07 00:51:32.117798 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-03-07 00:51:32.117807 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-03-07 00:51:32.117816 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-03-07 00:51:32.117825 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-03-07 00:51:32.117834 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-03-07 00:51:32.117842 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-03-07 00:51:32.117851 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-03-07 00:51:32.117859 | orchestrator |
2026-03-07 00:51:32.117869 | orchestrator | PLAY [Deploy k3s worker nodes] *************************************************
2026-03-07 00:51:32.117878 | orchestrator |
2026-03-07 00:51:32.117901 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] ***
2026-03-07 00:51:32.117909 | orchestrator | Saturday 07 March 2026 00:48:48 +0000 (0:00:02.872) 0:02:33.253 ********
2026-03-07 00:51:32.117914 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:51:32.117920 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:51:32.117925 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:51:32.117930 | orchestrator |
2026-03-07 00:51:32.117936 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] *******************************
2026-03-07 00:51:32.117941 | orchestrator | Saturday 07 March 2026 00:48:48 +0000 (0:00:00.483) 0:02:33.737 ********
2026-03-07 00:51:32.117947 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:51:32.117952 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:51:32.117958 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:51:32.117963 | orchestrator |
2026-03-07 00:51:32.117969 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ******************************
2026-03-07 00:51:32.117974 | orchestrator | Saturday 07 March 2026 00:48:49 +0000 (0:00:00.705) 0:02:34.442 ********
2026-03-07 00:51:32.117979 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:51:32.117985 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:51:32.117990 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:51:32.117995 | orchestrator |
2026-03-07 00:51:32.118001 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] **********************
2026-03-07 00:51:32.118006 | orchestrator | Saturday 07 March 2026 00:48:50 +0000 (0:00:00.420) 0:02:34.863 ********
2026-03-07 00:51:32.118011 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-07 00:51:32.118063 | orchestrator |
2026-03-07 00:51:32.118069 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] *************************
2026-03-07 00:51:32.118074 | orchestrator | Saturday 07 March 2026 00:48:50 +0000 (0:00:00.632) 0:02:35.496 ********
2026-03-07 00:51:32.118080 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:51:32.118086 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:51:32.118091 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:51:32.118097 | orchestrator |
2026-03-07 00:51:32.118102 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] *******************************
2026-03-07 00:51:32.118107 | orchestrator | Saturday 07 March 2026 00:48:50 +0000 (0:00:00.304) 0:02:35.801 ********
2026-03-07 00:51:32.118113 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:51:32.118118 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:51:32.118123 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:51:32.118129 | orchestrator |
2026-03-07 00:51:32.118134 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] **********************************
2026-03-07 00:51:32.118139 | orchestrator | Saturday 07 March 2026 00:48:51 +0000 (0:00:00.294) 0:02:36.095 ********
2026-03-07 00:51:32.118145 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:51:32.118156 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:51:32.118165 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:51:32.118174 | orchestrator |
2026-03-07 00:51:32.118183 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] ***************************
2026-03-07 00:51:32.118193 | orchestrator | Saturday 07 March 2026 00:48:51 +0000 (0:00:00.278) 0:02:36.373 ********
2026-03-07 00:51:32.118202 | orchestrator | changed: [testbed-node-3]
2026-03-07 00:51:32.118212 | orchestrator | changed: [testbed-node-4]
2026-03-07 00:51:32.118221 | orchestrator | changed: [testbed-node-5]
2026-03-07 00:51:32.118229 | orchestrator |
2026-03-07 00:51:32.118247 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] ***************************
2026-03-07 00:51:32.118257 | orchestrator | Saturday 07 March 2026 00:48:52 +0000 (0:00:00.793) 0:02:37.167 ********
2026-03-07 00:51:32.118267 | orchestrator | changed: [testbed-node-4]
2026-03-07 00:51:32.118276 | orchestrator | changed: [testbed-node-3]
2026-03-07 00:51:32.118285 | orchestrator | changed: [testbed-node-5]
2026-03-07 00:51:32.118294 | orchestrator |
2026-03-07 00:51:32.118303 | orchestrator | TASK [k3s_agent : Configure the k3s service] ***********************************
2026-03-07 00:51:32.118312 | orchestrator | Saturday 07 March 2026 00:48:53 +0000 (0:00:01.043) 0:02:38.211 ********
2026-03-07 00:51:32.118321 | orchestrator | changed: [testbed-node-3]
2026-03-07 00:51:32.118353 | orchestrator | changed: [testbed-node-4]
2026-03-07 00:51:32.118362 | orchestrator | changed: [testbed-node-5]
2026-03-07 00:51:32.118371 | orchestrator |
2026-03-07 00:51:32.118380 | orchestrator | TASK [k3s_agent : Manage k3s service] ******************************************
2026-03-07 00:51:32.118389 | orchestrator | Saturday 07 March 2026 00:48:54 +0000 (0:00:01.092) 0:02:39.303 ********
2026-03-07 00:51:32.118397 | orchestrator | changed: [testbed-node-4]
2026-03-07 00:51:32.118406 | orchestrator | changed: [testbed-node-3]
2026-03-07 00:51:32.118415 | orchestrator | changed: [testbed-node-5]
2026-03-07 00:51:32.118424 | orchestrator |
2026-03-07 00:51:32.118434 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2026-03-07 00:51:32.118443 | orchestrator |
2026-03-07 00:51:32.118453 | orchestrator | TASK [Get home directory of operator user] *************************************
2026-03-07 00:51:32.118462 | orchestrator | Saturday 07 March 2026 00:49:05 +0000 (0:00:10.977) 0:02:50.280 ********
2026-03-07 00:51:32.118471 | orchestrator | ok: [testbed-manager]
2026-03-07 00:51:32.118480 | orchestrator |
2026-03-07 00:51:32.118489 | orchestrator | TASK [Create .kube directory] **************************************************
2026-03-07 00:51:32.118498 | orchestrator | Saturday 07 March 2026 00:49:06 +0000 (0:00:01.020) 0:02:51.301 ********
2026-03-07 00:51:32.118508 | orchestrator | changed: [testbed-manager]
2026-03-07 00:51:32.118517 | orchestrator |
2026-03-07 00:51:32.118526 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-03-07 00:51:32.118535 | orchestrator | Saturday 07 March 2026 00:49:07 +0000 (0:00:00.584) 0:02:51.885 ********
2026-03-07 00:51:32.118544 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-03-07 00:51:32.118554 | orchestrator |
2026-03-07 00:51:32.118562 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-03-07 00:51:32.118571 | orchestrator | Saturday 07 March 2026 00:49:07 +0000 (0:00:00.617) 0:02:52.502 ********
2026-03-07 00:51:32.118582 | orchestrator | changed: [testbed-manager]
2026-03-07 00:51:32.118590 | orchestrator |
2026-03-07 00:51:32.118599 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2026-03-07 00:51:32.118608 | orchestrator | Saturday 07 March 2026 00:49:08 +0000 (0:00:01.087) 0:02:53.590 ********
2026-03-07 00:51:32.118617 | orchestrator | changed: [testbed-manager]
2026-03-07 00:51:32.118627 | orchestrator |
2026-03-07 00:51:32.118642 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2026-03-07 00:51:32.118652 | orchestrator | Saturday 07 March 2026 00:49:09 +0000 (0:00:00.947) 0:02:54.537 ********
2026-03-07 00:51:32.118661 | orchestrator | changed: [testbed-manager -> localhost]
2026-03-07 00:51:32.118672 | orchestrator |
2026-03-07 00:51:32.118682 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2026-03-07 00:51:32.118705 | orchestrator | Saturday 07 March 2026 00:49:11 +0000 (0:00:02.295) 0:02:56.832 ********
2026-03-07 00:51:32.118715 | orchestrator | changed: [testbed-manager -> localhost]
2026-03-07 00:51:32.118724 | orchestrator |
2026-03-07 00:51:32.118733 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2026-03-07 00:51:32.118742 | orchestrator | Saturday 07 March 2026 00:49:13 +0000 (0:00:01.096) 0:02:57.928 ********
2026-03-07 00:51:32.118750 | orchestrator | changed: [testbed-manager]
2026-03-07 00:51:32.118759 | orchestrator |
2026-03-07 00:51:32.118767 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2026-03-07 00:51:32.118775 | orchestrator | Saturday 07 March 2026 00:49:14 +0000 (0:00:00.924) 0:02:58.853 ********
2026-03-07 00:51:32.118785 | orchestrator | changed: [testbed-manager]
2026-03-07 00:51:32.118794 | orchestrator |
2026-03-07 00:51:32.118805 | orchestrator | PLAY [Apply role kubectl] ******************************************************
2026-03-07 00:51:32.118814 | orchestrator |
2026-03-07 00:51:32.118824 | orchestrator | TASK [kubectl : Gather variables for each operating system] ********************
2026-03-07 00:51:32.118833 | orchestrator | Saturday 07 March 2026 00:49:14 +0000 (0:00:00.198) 0:02:59.508 ********
2026-03-07 00:51:32.118842 | orchestrator | ok: [testbed-manager]
2026-03-07 00:51:32.118852 | orchestrator |
2026-03-07 00:51:32.118862 | orchestrator | TASK [kubectl : Include distribution specific install tasks] *******************
2026-03-07 00:51:32.118870 | orchestrator | Saturday 07 March 2026 00:49:14 +0000 (0:00:00.326) 0:02:59.707 ********
2026-03-07 00:51:32.118876 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager
2026-03-07 00:51:32.118882 | orchestrator |
2026-03-07 00:51:32.118887 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ******************
2026-03-07 00:51:32.118893 | orchestrator | Saturday 07 March 2026 00:49:15 +0000 (0:00:00.326) 0:03:00.034 ********
2026-03-07 00:51:32.118898 | orchestrator | ok: [testbed-manager]
2026-03-07 00:51:32.118903 | orchestrator |
2026-03-07 00:51:32.118909 | orchestrator | TASK [kubectl : Install apt-transport-https package] ***************************
2026-03-07 00:51:32.118914 | orchestrator | Saturday 07 March 2026 00:49:16 +0000 (0:00:01.579) 0:03:01.613 ********
2026-03-07 00:51:32.118919 | orchestrator | ok: [testbed-manager]
2026-03-07 00:51:32.118925 | orchestrator |
2026-03-07 00:51:32.118930 | orchestrator | TASK [kubectl : Add repository gpg key] ****************************************
2026-03-07 00:51:32.118936 | orchestrator | Saturday 07 March 2026 00:49:18 +0000 (0:00:02.180) 0:03:03.794 ********
2026-03-07 00:51:32.118941 | orchestrator | changed: [testbed-manager]
2026-03-07 00:51:32.118946 | orchestrator |
2026-03-07 00:51:32.118952 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************
2026-03-07 00:51:32.118957 | orchestrator | Saturday 07 March 2026 00:49:19 +0000 (0:00:01.032) 0:03:04.826 ********
2026-03-07 00:51:32.118963 | orchestrator | ok: [testbed-manager]
2026-03-07 00:51:32.118968 | orchestrator |
2026-03-07 00:51:32.118983 | orchestrator | TASK [kubectl : Add repository Debian] *****************************************
2026-03-07 00:51:32.118989 | orchestrator | Saturday 07 March 2026 00:49:20 +0000 (0:00:00.648) 0:03:05.475 ********
2026-03-07 00:51:32.118994 | orchestrator | changed: [testbed-manager]
2026-03-07 00:51:32.119000 | orchestrator |
2026-03-07 00:51:32.119005 | orchestrator | TASK [kubectl : Install required packages] *************************************
2026-03-07 00:51:32.119011 | orchestrator | Saturday 07 March 2026 00:49:32 +0000 (0:00:12.065) 0:03:17.540 ********
2026-03-07 00:51:32.119016 | orchestrator | changed: [testbed-manager]
2026-03-07 00:51:32.119021 | orchestrator |
2026-03-07 00:51:32.119027 | orchestrator | TASK [kubectl : Remove kubectl symlink] ****************************************
2026-03-07 00:51:32.119032 | orchestrator | Saturday 07 March 2026 00:49:48 +0000 (0:00:16.185) 0:03:33.725 ********
2026-03-07 00:51:32.119038 | orchestrator | ok: [testbed-manager]
2026-03-07 00:51:32.119043 | orchestrator |
2026-03-07 00:51:32.119049 | orchestrator | PLAY [Run post actions on master nodes] ****************************************
2026-03-07 00:51:32.119054 | orchestrator |
2026-03-07 00:51:32.119065 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] ***
2026-03-07 00:51:32.119072 | orchestrator | Saturday 07 March 2026 00:49:49 +0000 (0:00:00.672) 0:03:34.398 ********
2026-03-07 00:51:32.119077 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:51:32.119082 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:51:32.119088 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:51:32.119093 | orchestrator |
2026-03-07 00:51:32.119099 | orchestrator | TASK [k3s_server_post : Deploy calico] *****************************************
2026-03-07 00:51:32.119104 | orchestrator | Saturday 07 March 2026 00:49:49 +0000 (0:00:00.412) 0:03:34.810 ********
2026-03-07 00:51:32.119109 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:51:32.119115 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:51:32.119120 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:51:32.119125 | orchestrator |
2026-03-07 00:51:32.119131 | orchestrator | TASK [k3s_server_post : Deploy cilium] *****************************************
2026-03-07 00:51:32.119136 | orchestrator | Saturday 07 March 2026 00:49:50 +0000 (0:00:00.432) 0:03:35.243 ********
2026-03-07 00:51:32.119142 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-07 00:51:32.119147 | orchestrator |
2026-03-07 00:51:32.119153 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ******************
2026-03-07 00:51:32.119158 | orchestrator | Saturday 07 March 2026 00:49:51 +0000 (0:00:00.901) 0:03:36.145 ********
2026-03-07 00:51:32.119164 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-03-07 00:51:32.119169 | orchestrator |
2026-03-07 00:51:32.119175 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] *********************
2026-03-07 00:51:32.119180 | orchestrator | Saturday 07 March 2026 00:49:52 +0000 (0:00:01.398) 0:03:37.543 ********
2026-03-07 00:51:32.119186 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-07 00:51:32.119191 | orchestrator |
2026-03-07 00:51:32.119197 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************
2026-03-07 00:51:32.119202 | orchestrator | Saturday 07 March 2026 00:49:53 +0000 (0:00:01.278) 0:03:38.822 ********
2026-03-07 00:51:32.119208 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:51:32.119213 | orchestrator |
2026-03-07 00:51:32.119219 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] **********************
2026-03-07 00:51:32.119224 | orchestrator | Saturday 07 March 2026 00:49:54 +0000 (0:00:00.223) 0:03:39.046 ********
2026-03-07 00:51:32.119230 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-07 00:51:32.119235 | orchestrator |
2026-03-07 00:51:32.119245 | orchestrator | TASK [k3s_server_post : Check Cilium version] **********************************
2026-03-07 00:51:32.119254 | orchestrator | Saturday 07 March 2026 00:49:55 +0000 (0:00:01.396) 0:03:40.442 ********
2026-03-07 00:51:32.119262 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:51:32.119271 | orchestrator |
2026-03-07 00:51:32.119281 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************
2026-03-07 00:51:32.119833 | orchestrator | Saturday 07 March 2026 00:49:55 +0000 (0:00:00.230) 0:03:40.673 ********
2026-03-07 00:51:32.119873 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:51:32.119883 | orchestrator |
2026-03-07 00:51:32.119892 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] **********************
2026-03-07 00:51:32.119901 | orchestrator | Saturday 07 March 2026 00:49:56 +0000 (0:00:00.254) 0:03:40.928 ********
2026-03-07 00:51:32.119910 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:51:32.119918 | orchestrator |
2026-03-07 00:51:32.119926 | orchestrator | TASK [k3s_server_post : Log result] ********************************************
2026-03-07 00:51:32.119935 | orchestrator | Saturday 07 March 2026 00:49:56 +0000 (0:00:00.174) 0:03:41.102 ********
2026-03-07 00:51:32.119944 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:51:32.119953 | orchestrator |
2026-03-07 00:51:32.119961 | orchestrator | TASK [k3s_server_post : Install Cilium] ****************************************
2026-03-07 00:51:32.119970 | orchestrator | Saturday 07 March 2026 00:49:56 +0000 (0:00:00.237) 0:03:41.339 ********
2026-03-07 00:51:32.119995 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-03-07 00:51:32.120004 | orchestrator |
2026-03-07 00:51:32.120017 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] *****************************
2026-03-07 00:51:32.120029 | orchestrator | Saturday 07 March 2026 00:50:03 +0000 (0:00:06.535) 0:03:47.874 ********
2026-03-07 00:51:32.120038 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator)
2026-03-07 00:51:32.120047 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left).
2026-03-07 00:51:32.120057 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium)
2026-03-07 00:51:32.120151 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay)
2026-03-07 00:51:32.120162 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui)
2026-03-07 00:51:32.120172 | orchestrator |
2026-03-07 00:51:32.120182 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************
2026-03-07 00:51:32.120192 | orchestrator | Saturday 07 March 2026 00:50:46 +0000 (0:00:43.403) 0:04:31.278 ********
2026-03-07 00:51:32.120215 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-07 00:51:32.120224 | orchestrator |
2026-03-07 00:51:32.120235 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ********************
2026-03-07 00:51:32.120244 | orchestrator | Saturday 07 March 2026 00:50:47 +0000 (0:00:01.560) 0:04:32.838 ********
2026-03-07 00:51:32.120255 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-03-07 00:51:32.120264 | orchestrator |
2026-03-07 00:51:32.120274 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] ***********************************
2026-03-07 00:51:32.120284 | orchestrator | Saturday 07 March 2026 00:50:50 +0000 (0:00:02.539) 0:04:35.377 ********
2026-03-07 00:51:32.120294 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-03-07 00:51:32.120304 | orchestrator |
2026-03-07 00:51:32.120313 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] ***
2026-03-07 00:51:32.120322 | orchestrator | Saturday 07 March 2026 00:50:52 +0000 (0:00:01.885) 0:04:37.263 ********
2026-03-07 00:51:32.120381 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:51:32.120393 | orchestrator |
2026-03-07 00:51:32.120403 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] *************************
2026-03-07 00:51:32.120412 | orchestrator | Saturday 07 March 2026 00:50:52 +0000 (0:00:00.193) 0:04:37.456 ********
2026-03-07 00:51:32.120421 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)
2026-03-07 00:51:32.120431 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io)
2026-03-07 00:51:32.120439 | orchestrator |
2026-03-07 00:51:32.120449 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] ***********************************
2026-03-07 00:51:32.120458 | orchestrator | Saturday 07 March 2026 00:50:54 +0000 (0:00:02.356) 0:04:39.813 ********
2026-03-07 00:51:32.120467 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:51:32.120476 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:51:32.120486 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:51:32.120496 | orchestrator |
2026-03-07 00:51:32.120504 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] ***************
2026-03-07 00:51:32.120515 | orchestrator | Saturday 07 March 2026 00:50:55 +0000 (0:00:00.433) 0:04:40.247 ********
2026-03-07 00:51:32.120525 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:51:32.120534 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:51:32.120543 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:51:32.120551 | orchestrator |
2026-03-07 00:51:32.120559 | orchestrator | PLAY [Apply role k9s] **********************************************************
2026-03-07 00:51:32.120568 | orchestrator |
2026-03-07 00:51:32.120577 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************
2026-03-07 00:51:32.120587 | orchestrator | Saturday 07 March 2026 00:50:56 +0000 (0:00:01.438) 0:04:41.685 ********
2026-03-07 00:51:32.120595 | orchestrator | ok: [testbed-manager]
2026-03-07 00:51:32.120604 | orchestrator |
2026-03-07 00:51:32.120613 | orchestrator | TASK [k9s : Include distribution specific install tasks] ***********************
2026-03-07 00:51:32.120631 | orchestrator | Saturday 07 March 2026 00:50:57 +0000 (0:00:00.339) 0:04:42.025 ********
2026-03-07 00:51:32.120640 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager
2026-03-07 00:51:32.120653 | orchestrator |
2026-03-07 00:51:32.120663 | orchestrator | TASK [k9s : Install k9s packages] **********************************************
2026-03-07 00:51:32.120671 | orchestrator | Saturday 07 March 2026 00:50:57 +0000 (0:00:00.329) 0:04:42.354 ********
2026-03-07 00:51:32.120680 | orchestrator | changed: [testbed-manager]
2026-03-07 00:51:32.120688 | orchestrator |
2026-03-07 00:51:32.120697 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] *****************
2026-03-07 00:51:32.120706 | orchestrator |
2026-03-07 00:51:32.120715 | orchestrator | TASK [Merge labels, annotations, and taints] ***********************************
2026-03-07 00:51:32.120725 | orchestrator | Saturday 07 March 2026 00:51:03 +0000 (0:00:06.287) 0:04:48.642 ********
2026-03-07 00:51:32.120734 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:51:32.120742 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:51:32.120751 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:51:32.120760 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:51:32.120769 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:51:32.120777 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:51:32.120787 | orchestrator |
2026-03-07 00:51:32.120795 | orchestrator | TASK [Manage labels] ***********************************************************
2026-03-07 00:51:32.120804 | orchestrator | Saturday 07 March 2026 00:51:05 +0000 (0:00:02.020) 0:04:50.662 ********
2026-03-07 00:51:32.120815 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-03-07 00:51:32.120825 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-03-07 00:51:32.120834 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-03-07 00:51:32.120843 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-03-07 00:51:32.120851 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-03-07 00:51:32.120866 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-03-07 00:51:32.120875 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-03-07 00:51:32.120883 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-03-07 00:51:32.120893 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-03-07 00:51:32.120903 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled)
2026-03-07 00:51:32.120911 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled)
2026-03-07 00:51:32.120918 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled)
2026-03-07 00:51:32.120932 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-03-07 00:51:32.120940 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-03-07 00:51:32.120948 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-03-07 00:51:32.120957 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-03-07 00:51:32.120965 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-03-07 00:51:32.120974 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-03-07 00:51:32.120982 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-03-07 00:51:32.120989 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-03-07 00:51:32.120997 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-03-07 00:51:32.121014 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-03-07 00:51:32.121022 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-03-07 00:51:32.121030 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-03-07 00:51:32.121039 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-03-07 00:51:32.121047 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-03-07 00:51:32.121054 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-03-07 00:51:32.121062 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-03-07 00:51:32.121069 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-03-07 00:51:32.121077 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-03-07 00:51:32.121084 | orchestrator |
2026-03-07 00:51:32.121092 | orchestrator | TASK [Manage annotations] ******************************************************
2026-03-07 00:51:32.121100 | orchestrator | Saturday 07 March 2026 00:51:29 +0000 (0:00:23.760) 0:05:14.423 ********
2026-03-07 00:51:32.121108 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:51:32.121116 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:51:32.121125 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:51:32.121133 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:51:32.121141 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:51:32.121148 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:51:32.121156 | orchestrator |
2026-03-07 00:51:32.121164 | orchestrator | TASK [Manage taints] ***********************************************************
2026-03-07 00:51:32.121172 | orchestrator | Saturday 07 March 2026 00:51:30 +0000 (0:00:00.843) 0:05:15.267 ********
2026-03-07 00:51:32.121180 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:51:32.121187 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:51:32.121195 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:51:32.121203 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:51:32.121211 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:51:32.121219 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:51:32.121227 | orchestrator |
2026-03-07 00:51:32.121238 | orchestrator | PLAY RECAP *********************************************************************
2026-03-07 00:51:32.121246 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-07 00:51:32.121257 | orchestrator | testbed-node-0 : ok=50  changed=23  unreachable=0 failed=0 skipped=28  rescued=0 ignored=0
2026-03-07 00:51:32.121266 | orchestrator | testbed-node-1 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-03-07 00:51:32.121273 | orchestrator | testbed-node-2 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-03-07 00:51:32.121281 | orchestrator | testbed-node-3 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-03-07 00:51:32.121290 | orchestrator | testbed-node-4 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-03-07 00:51:32.121301 | orchestrator | testbed-node-5 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-03-07 00:51:32.121309 | orchestrator |
2026-03-07 00:51:32.121317 | orchestrator |
2026-03-07 00:51:32.121326 | orchestrator | TASKS RECAP ********************************************************************
2026-03-07 00:51:32.121353 | orchestrator | Saturday 07 March 2026 00:51:30 +0000 (0:00:00.563) 0:05:15.830 ********
2026-03-07 00:51:32.121361 | orchestrator | ===============================================================================
2026-03-07 00:51:32.121368 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 43.84s
2026-03-07 00:51:32.121375 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 43.40s
2026-03-07 00:51:32.121383 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 25.62s
2026-03-07 00:51:32.121397 | orchestrator | Manage labels ---------------------------------------------------------- 23.76s
2026-03-07 00:51:32.121405 | orchestrator | kubectl : Install required packages ------------------------------------ 16.19s
2026-03-07 00:51:32.121414 | orchestrator | kubectl : Add repository Debian ---------------------------------------- 12.07s
2026-03-07 00:51:32.121423 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 10.98s
2026-03-07 00:51:32.121430 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 6.53s
2026-03-07 00:51:32.121437 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 6.29s
2026-03-07 00:51:32.121445 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 5.82s
2026-03-07 00:51:32.121453 | orchestrator | k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers --- 4.10s
2026-03-07 00:51:32.121460 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 3.59s
2026-03-07 00:51:32.121467 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 3.35s
2026-03-07 00:51:32.121474 | orchestrator | k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured --- 3.34s
2026-03-07 00:51:32.121482 | orchestrator | k3s_server : Create custom resolv.conf for k3s -------------------------- 3.13s
2026-03-07 00:51:32.121489 | orchestrator | k3s_download : Download k3s binary arm64 -------------------------------- 2.99s
2026-03-07 00:51:32.121496 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 2.87s
2026-03-07 00:51:32.121504 | orchestrator | k3s_server : Detect Kubernetes version for label compatibility ---------- 2.65s
2026-03-07 00:51:32.121512 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.61s
2026-03-07 00:51:32.121520 | orchestrator | k3s_server_post : Copy BGP manifests to first master -------------------- 2.54s
2026-03-07 00:51:32.121528 | orchestrator | 2026-03-07 00:51:32 | INFO  | Task 58f3c3b8-7548-464c-bafb-42fd537ca7de is in state STARTED
2026-03-07 00:51:32.121662 | orchestrator | 2026-03-07 00:51:32 | INFO  | Task 171b352d-5066-42cc-9db4-5c9db626783c is in state STARTED
2026-03-07 00:51:32.121673 | orchestrator | 2026-03-07 00:51:32 | INFO  | Wait 1 second(s) until the next check
2026-03-07 00:51:35.187688 | orchestrator | 2026-03-07 00:51:35 | INFO  | Task e7d8d39e-b6f4-40a7-8299-74f530eb746a is in state STARTED
2026-03-07 00:51:35.191158 | orchestrator | 2026-03-07 00:51:35 | INFO  | Task db65a241-cc10-4767-b020-49741a82d45e is in state STARTED
2026-03-07 00:51:35.192489 | orchestrator | 2026-03-07 00:51:35 | INFO  | Task 8563f1e5-128f-4928-b840-a6da32a959b2 is in state STARTED
2026-03-07 00:51:35.194817 | orchestrator | 2026-03-07 00:51:35 | INFO  | Task 694d3cb5-40e2-40ee-9873-08f7a683a628 is in state STARTED
2026-03-07 00:51:35.203949 | orchestrator | 2026-03-07 00:51:35 | INFO  | Task 58f3c3b8-7548-464c-bafb-42fd537ca7de is in state STARTED
2026-03-07 00:51:35.205260 | orchestrator | 2026-03-07 00:51:35 | INFO  | Task 171b352d-5066-42cc-9db4-5c9db626783c is in state STARTED
2026-03-07 00:51:35.207126 | orchestrator | 2026-03-07 00:51:35 | INFO  | Wait 1 second(s) until the next check
2026-03-07 00:51:38.252242 | orchestrator | 2026-03-07 00:51:38 | INFO  | Task e7d8d39e-b6f4-40a7-8299-74f530eb746a is in state STARTED
2026-03-07 00:51:38.254933 | orchestrator | 2026-03-07 00:51:38 | INFO  | Task db65a241-cc10-4767-b020-49741a82d45e is in state STARTED
2026-03-07 00:51:38.256122 | orchestrator | 2026-03-07 00:51:38 | INFO  | Task 8563f1e5-128f-4928-b840-a6da32a959b2 is in state STARTED
2026-03-07 00:51:38.258914 | orchestrator | 2026-03-07 00:51:38 | INFO  | Task 694d3cb5-40e2-40ee-9873-08f7a683a628 is in state STARTED
2026-03-07 00:51:38.261579 | orchestrator | 2026-03-07 00:51:38 | INFO  | Task 58f3c3b8-7548-464c-bafb-42fd537ca7de is in state STARTED
2026-03-07 00:51:38.264184 | orchestrator | 2026-03-07 00:51:38 | INFO  | Task 171b352d-5066-42cc-9db4-5c9db626783c is in state STARTED
2026-03-07 00:51:38.264240 | orchestrator | 2026-03-07 00:51:38 | INFO  | Wait 1 second(s) until the next check
2026-03-07 00:51:41.355291 | orchestrator | 2026-03-07 00:51:41 | INFO  | Task e7d8d39e-b6f4-40a7-8299-74f530eb746a is in state STARTED
2026-03-07 00:51:41.355398 | orchestrator | 2026-03-07 00:51:41 | INFO  | Task db65a241-cc10-4767-b020-49741a82d45e is in state STARTED
2026-03-07 00:51:41.355405 | orchestrator | 2026-03-07 00:51:41 | INFO  | Task 8563f1e5-128f-4928-b840-a6da32a959b2 is in state STARTED
2026-03-07 00:51:41.355409 | orchestrator | 2026-03-07 00:51:41 | INFO  | Task 694d3cb5-40e2-40ee-9873-08f7a683a628 is in state STARTED
2026-03-07 00:51:41.357527 | orchestrator | 2026-03-07 00:51:41 | INFO  | Task 58f3c3b8-7548-464c-bafb-42fd537ca7de is in state STARTED
2026-03-07 00:51:41.358410 | orchestrator | 2026-03-07 00:51:41 | INFO  | Task 171b352d-5066-42cc-9db4-5c9db626783c is in state STARTED
2026-03-07 00:51:41.358460 | orchestrator | 2026-03-07 00:51:41 | INFO  | Wait 1 second(s) until the next check
2026-03-07 00:51:44.460271 | orchestrator | 2026-03-07 00:51:44 | INFO  | Task e7d8d39e-b6f4-40a7-8299-74f530eb746a is in state SUCCESS
2026-03-07 00:51:44.461478 | orchestrator | 2026-03-07 00:51:44 | INFO  | Task db65a241-cc10-4767-b020-49741a82d45e is in state STARTED
2026-03-07 00:51:44.462863 | orchestrator | 2026-03-07 00:51:44 | INFO  | Task 8563f1e5-128f-4928-b840-a6da32a959b2 is in state STARTED
2026-03-07 00:51:44.465833 | orchestrator | 2026-03-07 00:51:44 | INFO  | Task 694d3cb5-40e2-40ee-9873-08f7a683a628 is in state STARTED
2026-03-07 00:51:44.466146 | orchestrator | 2026-03-07 00:51:44 | INFO  | Task 58f3c3b8-7548-464c-bafb-42fd537ca7de is in state STARTED
2026-03-07 00:51:44.467618 | orchestrator | 2026-03-07 00:51:44 | INFO  | Task 171b352d-5066-42cc-9db4-5c9db626783c is in state STARTED
2026-03-07 00:51:44.467654 | orchestrator | 2026-03-07 00:51:44 | INFO  | Wait 1 second(s) until the next check
2026-03-07 00:51:47.607082 | orchestrator | 2026-03-07 00:51:47 | INFO  | Task db65a241-cc10-4767-b020-49741a82d45e is in state STARTED
2026-03-07 00:51:47.607184 | orchestrator | 2026-03-07 00:51:47 | INFO  | Task 8563f1e5-128f-4928-b840-a6da32a959b2 is in state STARTED
2026-03-07 00:51:47.607199 | orchestrator | 2026-03-07 00:51:47 | INFO  | Task 694d3cb5-40e2-40ee-9873-08f7a683a628 is in state STARTED
2026-03-07 00:51:47.607210 | orchestrator | 2026-03-07 00:51:47 | INFO  | Task 58f3c3b8-7548-464c-bafb-42fd537ca7de is in state STARTED
2026-03-07 00:51:47.607221 | orchestrator | 2026-03-07 00:51:47 | INFO  | Task 171b352d-5066-42cc-9db4-5c9db626783c is in state STARTED
2026-03-07 00:51:47.607229 | orchestrator | 2026-03-07 00:51:47 | INFO  | Wait 1
second(s) until the next check 2026-03-07 00:51:50.615821 | orchestrator | 2026-03-07 00:51:50 | INFO  | Task db65a241-cc10-4767-b020-49741a82d45e is in state SUCCESS 2026-03-07 00:51:50.618434 | orchestrator | 2026-03-07 00:51:50 | INFO  | Task 8563f1e5-128f-4928-b840-a6da32a959b2 is in state STARTED 2026-03-07 00:51:50.619492 | orchestrator | 2026-03-07 00:51:50 | INFO  | Task 694d3cb5-40e2-40ee-9873-08f7a683a628 is in state STARTED 2026-03-07 00:51:50.622040 | orchestrator | 2026-03-07 00:51:50 | INFO  | Task 58f3c3b8-7548-464c-bafb-42fd537ca7de is in state STARTED 2026-03-07 00:51:50.623241 | orchestrator | 2026-03-07 00:51:50 | INFO  | Task 171b352d-5066-42cc-9db4-5c9db626783c is in state STARTED 2026-03-07 00:51:50.623267 | orchestrator | 2026-03-07 00:51:50 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:51:53.662563 | orchestrator | 2026-03-07 00:51:53 | INFO  | Task 8563f1e5-128f-4928-b840-a6da32a959b2 is in state STARTED 2026-03-07 00:51:53.663869 | orchestrator | 2026-03-07 00:51:53 | INFO  | Task 694d3cb5-40e2-40ee-9873-08f7a683a628 is in state STARTED 2026-03-07 00:51:53.667896 | orchestrator | 2026-03-07 00:51:53 | INFO  | Task 58f3c3b8-7548-464c-bafb-42fd537ca7de is in state STARTED 2026-03-07 00:51:53.668059 | orchestrator | 2026-03-07 00:51:53 | INFO  | Task 171b352d-5066-42cc-9db4-5c9db626783c is in state STARTED 2026-03-07 00:51:53.668156 | orchestrator | 2026-03-07 00:51:53 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:51:56.704338 | orchestrator | 2026-03-07 00:51:56 | INFO  | Task 8563f1e5-128f-4928-b840-a6da32a959b2 is in state STARTED 2026-03-07 00:51:56.704729 | orchestrator | 2026-03-07 00:51:56 | INFO  | Task 694d3cb5-40e2-40ee-9873-08f7a683a628 is in state STARTED 2026-03-07 00:51:56.705835 | orchestrator | 2026-03-07 00:51:56 | INFO  | Task 58f3c3b8-7548-464c-bafb-42fd537ca7de is in state STARTED 2026-03-07 00:51:56.707542 | orchestrator | 2026-03-07 00:51:56 | INFO  | Task 
171b352d-5066-42cc-9db4-5c9db626783c is in state STARTED 2026-03-07 00:52:05.862546 | orchestrator | 2026-03-07 00:52:05 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:52:08.900994 | orchestrator | 2026-03-07 00:52:08 | INFO  | Task 8563f1e5-128f-4928-b840-a6da32a959b2 is in state STARTED 2026-03-07 00:52:08.901596 | orchestrator | 2026-03-07 00:52:08 | INFO  | Task 694d3cb5-40e2-40ee-9873-08f7a683a628 is in state STARTED 2026-03-07 00:52:08.902602 | orchestrator | 2026-03-07 00:52:08 | INFO  | Task 58f3c3b8-7548-464c-bafb-42fd537ca7de is in state STARTED 2026-03-07 00:52:08.904615 | orchestrator | 2026-03-07 00:52:08 | INFO  | Task 171b352d-5066-42cc-9db4-5c9db626783c is in state STARTED 2026-03-07 00:52:08.905437 | orchestrator | 2026-03-07 00:52:08 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:52:11.948275 | orchestrator | 2026-03-07 00:52:11 | INFO  | Task 8563f1e5-128f-4928-b840-a6da32a959b2 is in state STARTED 2026-03-07 00:52:11.948719 | orchestrator | 2026-03-07 00:52:11 | INFO  | Task 694d3cb5-40e2-40ee-9873-08f7a683a628 is in state STARTED 2026-03-07 00:52:11.949441 | orchestrator | 2026-03-07 00:52:11 | INFO  | Task 58f3c3b8-7548-464c-bafb-42fd537ca7de is in state STARTED 2026-03-07 00:52:11.950691 | orchestrator | 2026-03-07 00:52:11 | INFO  | Task 171b352d-5066-42cc-9db4-5c9db626783c is in state SUCCESS 2026-03-07 00:52:11.950776 | orchestrator | 2026-03-07 00:52:11 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:52:11.952183 | orchestrator | 2026-03-07 00:52:11.952225 | orchestrator | 2026-03-07 00:52:11.952237 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] ************************* 2026-03-07 00:52:11.952248 | orchestrator | 2026-03-07 00:52:11.952258 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-03-07 00:52:11.952269 | orchestrator | Saturday 07 March 2026 00:51:38 +0000 (0:00:00.255) 0:00:00.255 ******** 2026-03-07 
00:52:11.952280 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-03-07 00:52:11.952291 | orchestrator | 2026-03-07 00:52:11.952310 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-03-07 00:52:11.952328 | orchestrator | Saturday 07 March 2026 00:51:39 +0000 (0:00:00.932) 0:00:01.188 ******** 2026-03-07 00:52:11.952347 | orchestrator | changed: [testbed-manager] 2026-03-07 00:52:11.952364 | orchestrator | 2026-03-07 00:52:11.952378 | orchestrator | TASK [Change server address in the kubeconfig file] **************************** 2026-03-07 00:52:11.952389 | orchestrator | Saturday 07 March 2026 00:51:40 +0000 (0:00:01.771) 0:00:02.959 ******** 2026-03-07 00:52:11.952399 | orchestrator | changed: [testbed-manager] 2026-03-07 00:52:11.952409 | orchestrator | 2026-03-07 00:52:11.952455 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-07 00:52:11.952472 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-07 00:52:11.952491 | orchestrator | 2026-03-07 00:52:11.952506 | orchestrator | 2026-03-07 00:52:11.952522 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-07 00:52:11.952539 | orchestrator | Saturday 07 March 2026 00:51:41 +0000 (0:00:00.659) 0:00:03.619 ******** 2026-03-07 00:52:11.952576 | orchestrator | =============================================================================== 2026-03-07 00:52:11.952591 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.77s 2026-03-07 00:52:11.952601 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.93s 2026-03-07 00:52:11.952611 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.66s 2026-03-07 00:52:11.952624 | orchestrator | 2026-03-07 00:52:11.952640 
| orchestrator | 2026-03-07 00:52:11.952656 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2026-03-07 00:52:11.952671 | orchestrator | 2026-03-07 00:52:11.952689 | orchestrator | TASK [Get home directory of operator user] ************************************* 2026-03-07 00:52:11.952705 | orchestrator | Saturday 07 March 2026 00:51:38 +0000 (0:00:00.230) 0:00:00.230 ******** 2026-03-07 00:52:11.952747 | orchestrator | ok: [testbed-manager] 2026-03-07 00:52:11.952759 | orchestrator | 2026-03-07 00:52:11.952770 | orchestrator | TASK [Create .kube directory] ************************************************** 2026-03-07 00:52:11.952782 | orchestrator | Saturday 07 March 2026 00:51:38 +0000 (0:00:00.715) 0:00:00.946 ******** 2026-03-07 00:52:11.952793 | orchestrator | ok: [testbed-manager] 2026-03-07 00:52:11.952804 | orchestrator | 2026-03-07 00:52:11.952816 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-03-07 00:52:11.952826 | orchestrator | Saturday 07 March 2026 00:51:39 +0000 (0:00:00.694) 0:00:01.640 ******** 2026-03-07 00:52:11.952837 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-03-07 00:52:11.952849 | orchestrator | 2026-03-07 00:52:11.952860 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-03-07 00:52:11.952871 | orchestrator | Saturday 07 March 2026 00:51:40 +0000 (0:00:00.859) 0:00:02.500 ******** 2026-03-07 00:52:11.952882 | orchestrator | changed: [testbed-manager] 2026-03-07 00:52:11.952894 | orchestrator | 2026-03-07 00:52:11.952905 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2026-03-07 00:52:11.952916 | orchestrator | Saturday 07 March 2026 00:51:43 +0000 (0:00:02.755) 0:00:05.255 ******** 2026-03-07 00:52:11.952927 | orchestrator | changed: [testbed-manager] 2026-03-07 00:52:11.952938 | orchestrator | 2026-03-07 
00:52:11.952949 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2026-03-07 00:52:11.952958 | orchestrator | Saturday 07 March 2026 00:51:43 +0000 (0:00:00.749) 0:00:06.005 ******** 2026-03-07 00:52:11.952968 | orchestrator | changed: [testbed-manager -> localhost] 2026-03-07 00:52:11.952978 | orchestrator | 2026-03-07 00:52:11.952989 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2026-03-07 00:52:11.953005 | orchestrator | Saturday 07 March 2026 00:51:45 +0000 (0:00:02.064) 0:00:08.069 ******** 2026-03-07 00:52:11.953021 | orchestrator | changed: [testbed-manager -> localhost] 2026-03-07 00:52:11.953037 | orchestrator | 2026-03-07 00:52:11.953053 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2026-03-07 00:52:11.953068 | orchestrator | Saturday 07 March 2026 00:51:47 +0000 (0:00:01.392) 0:00:09.462 ******** 2026-03-07 00:52:11.953085 | orchestrator | ok: [testbed-manager] 2026-03-07 00:52:11.953103 | orchestrator | 2026-03-07 00:52:11.953119 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2026-03-07 00:52:11.953135 | orchestrator | Saturday 07 March 2026 00:51:47 +0000 (0:00:00.564) 0:00:10.026 ******** 2026-03-07 00:52:11.953150 | orchestrator | ok: [testbed-manager] 2026-03-07 00:52:11.953160 | orchestrator | 2026-03-07 00:52:11.953172 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-07 00:52:11.953187 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-07 00:52:11.953202 | orchestrator | 2026-03-07 00:52:11.953218 | orchestrator | 2026-03-07 00:52:11.953233 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-07 00:52:11.953249 | orchestrator | Saturday 07 March 2026 00:51:48 +0000 (0:00:00.393) 
0:00:10.419 ******** 2026-03-07 00:52:11.953264 | orchestrator | =============================================================================== 2026-03-07 00:52:11.953279 | orchestrator | Write kubeconfig file --------------------------------------------------- 2.76s 2026-03-07 00:52:11.953294 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 2.06s 2026-03-07 00:52:11.953310 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 1.39s 2026-03-07 00:52:11.953344 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.86s 2026-03-07 00:52:11.953362 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.75s 2026-03-07 00:52:11.953378 | orchestrator | Get home directory of operator user ------------------------------------- 0.72s 2026-03-07 00:52:11.953394 | orchestrator | Create .kube directory -------------------------------------------------- 0.69s 2026-03-07 00:52:11.953484 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.56s 2026-03-07 00:52:11.953498 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.39s 2026-03-07 00:52:11.953508 | orchestrator | 2026-03-07 00:52:11.953518 | orchestrator | 2026-03-07 00:52:11.953528 | orchestrator | PLAY [Set kolla_action_rabbitmq] *********************************************** 2026-03-07 00:52:11.953537 | orchestrator | 2026-03-07 00:52:11.953547 | orchestrator | TASK [Inform the user about the following task] ******************************** 2026-03-07 00:52:11.953557 | orchestrator | Saturday 07 March 2026 00:49:35 +0000 (0:00:00.272) 0:00:00.272 ******** 2026-03-07 00:52:11.953567 | orchestrator | ok: [localhost] => { 2026-03-07 00:52:11.953578 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine." 
2026-03-07 00:52:11.953588 | orchestrator | } 2026-03-07 00:52:11.953598 | orchestrator | 2026-03-07 00:52:11.953608 | orchestrator | TASK [Check RabbitMQ service] ************************************************** 2026-03-07 00:52:11.953618 | orchestrator | Saturday 07 March 2026 00:49:35 +0000 (0:00:00.141) 0:00:00.414 ******** 2026-03-07 00:52:11.953637 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"} 2026-03-07 00:52:11.953648 | orchestrator | ...ignoring 2026-03-07 00:52:11.953658 | orchestrator | 2026-03-07 00:52:11.953668 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ****** 2026-03-07 00:52:11.953678 | orchestrator | Saturday 07 March 2026 00:49:38 +0000 (0:00:03.273) 0:00:03.687 ******** 2026-03-07 00:52:11.953687 | orchestrator | skipping: [localhost] 2026-03-07 00:52:11.953697 | orchestrator | 2026-03-07 00:52:11.953706 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] ***************************** 2026-03-07 00:52:11.953716 | orchestrator | Saturday 07 March 2026 00:49:38 +0000 (0:00:00.063) 0:00:03.751 ******** 2026-03-07 00:52:11.953725 | orchestrator | ok: [localhost] 2026-03-07 00:52:11.953735 | orchestrator | 2026-03-07 00:52:11.953745 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-07 00:52:11.953754 | orchestrator | 2026-03-07 00:52:11.953764 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-07 00:52:11.953774 | orchestrator | Saturday 07 March 2026 00:49:39 +0000 (0:00:00.243) 0:00:03.995 ******** 2026-03-07 00:52:11.953784 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:52:11.953794 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:52:11.953804 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:52:11.953813 | orchestrator | 2026-03-07 
00:52:11.953823 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-07 00:52:11.953832 | orchestrator | Saturday 07 March 2026 00:49:39 +0000 (0:00:00.424) 0:00:04.420 ******** 2026-03-07 00:52:11.953842 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2026-03-07 00:52:11.953852 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2026-03-07 00:52:11.953861 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2026-03-07 00:52:11.953872 | orchestrator | 2026-03-07 00:52:11.953882 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2026-03-07 00:52:11.953891 | orchestrator | 2026-03-07 00:52:11.953901 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-03-07 00:52:11.953911 | orchestrator | Saturday 07 March 2026 00:49:40 +0000 (0:00:00.787) 0:00:05.207 ******** 2026-03-07 00:52:11.953922 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 00:52:11.953932 | orchestrator | 2026-03-07 00:52:11.953941 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-03-07 00:52:11.953951 | orchestrator | Saturday 07 March 2026 00:49:41 +0000 (0:00:00.827) 0:00:06.035 ******** 2026-03-07 00:52:11.953961 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:52:11.953970 | orchestrator | 2026-03-07 00:52:11.953980 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2026-03-07 00:52:11.953996 | orchestrator | Saturday 07 March 2026 00:49:42 +0000 (0:00:01.583) 0:00:07.619 ******** 2026-03-07 00:52:11.954006 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:52:11.954060 | orchestrator | 2026-03-07 00:52:11.954070 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 
2026-03-07 00:52:11.954078 | orchestrator | Saturday 07 March 2026 00:49:44 +0000 (0:00:01.308) 0:00:08.927 ******** 2026-03-07 00:52:11.954086 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:52:11.954093 | orchestrator | 2026-03-07 00:52:11.954102 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2026-03-07 00:52:11.954110 | orchestrator | Saturday 07 March 2026 00:49:44 +0000 (0:00:00.418) 0:00:09.345 ******** 2026-03-07 00:52:11.954117 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:52:11.954125 | orchestrator | 2026-03-07 00:52:11.954133 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2026-03-07 00:52:11.954141 | orchestrator | Saturday 07 March 2026 00:49:45 +0000 (0:00:01.145) 0:00:10.491 ******** 2026-03-07 00:52:11.954149 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:52:11.954157 | orchestrator | 2026-03-07 00:52:11.954165 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-03-07 00:52:11.954172 | orchestrator | Saturday 07 March 2026 00:49:47 +0000 (0:00:01.913) 0:00:12.404 ******** 2026-03-07 00:52:11.954180 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 00:52:11.954188 | orchestrator | 2026-03-07 00:52:11.954196 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-03-07 00:52:11.954213 | orchestrator | Saturday 07 March 2026 00:49:48 +0000 (0:00:01.439) 0:00:13.844 ******** 2026-03-07 00:52:11.954222 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:52:11.954229 | orchestrator | 2026-03-07 00:52:11.954238 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2026-03-07 00:52:11.954246 | orchestrator | Saturday 07 March 2026 00:49:49 +0000 (0:00:00.978) 0:00:14.822 ******** 2026-03-07 
00:52:11.954254 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:52:11.954261 | orchestrator | 2026-03-07 00:52:11.954269 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2026-03-07 00:52:11.954277 | orchestrator | Saturday 07 March 2026 00:49:50 +0000 (0:00:00.585) 0:00:15.407 ******** 2026-03-07 00:52:11.954285 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:52:11.954293 | orchestrator | 2026-03-07 00:52:11.954301 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2026-03-07 00:52:11.954308 | orchestrator | Saturday 07 March 2026 00:49:51 +0000 (0:00:00.576) 0:00:15.984 ******** 2026-03-07 00:52:11.954328 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-07 00:52:11.954342 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 
'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-07 00:52:11.954358 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-07 
00:52:11.954368 | orchestrator | 2026-03-07 00:52:11.954376 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2026-03-07 00:52:11.954384 | orchestrator | Saturday 07 March 2026 00:49:52 +0000 (0:00:01.449) 0:00:17.433 ******** 2026-03-07 00:52:11.954400 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-07 00:52:11.954436 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-07 00:52:11.954452 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-07 00:52:11.954461 | orchestrator | 2026-03-07 00:52:11.954469 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2026-03-07 00:52:11.954477 | orchestrator | Saturday 07 March 2026 00:49:56 +0000 (0:00:04.243) 0:00:21.677 ******** 2026-03-07 00:52:11.954485 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-03-07 00:52:11.954493 | 
orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-03-07 00:52:11.954501 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-03-07 00:52:11.954509 | orchestrator | 2026-03-07 00:52:11.954517 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2026-03-07 00:52:11.954525 | orchestrator | Saturday 07 March 2026 00:50:00 +0000 (0:00:03.799) 0:00:25.476 ******** 2026-03-07 00:52:11.954533 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-03-07 00:52:11.954540 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-03-07 00:52:11.954548 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-03-07 00:52:11.954556 | orchestrator | 2026-03-07 00:52:11.954564 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2026-03-07 00:52:11.954577 | orchestrator | Saturday 07 March 2026 00:50:03 +0000 (0:00:03.142) 0:00:28.620 ******** 2026-03-07 00:52:11.954585 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-03-07 00:52:11.954593 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-03-07 00:52:11.954601 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-03-07 00:52:11.954608 | orchestrator | 2026-03-07 00:52:11.954617 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2026-03-07 00:52:11.954624 | orchestrator | Saturday 07 March 2026 00:50:06 +0000 (0:00:02.310) 0:00:30.930 ******** 2026-03-07 00:52:11.954632 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-03-07 00:52:11.954640 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-03-07 00:52:11.954648 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-03-07 00:52:11.954656 | orchestrator | 2026-03-07 00:52:11.954664 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2026-03-07 00:52:11.954671 | orchestrator | Saturday 07 March 2026 00:50:07 +0000 (0:00:01.809) 0:00:32.740 ******** 2026-03-07 00:52:11.954685 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-03-07 00:52:11.954693 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-03-07 00:52:11.954705 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-03-07 00:52:11.954713 | orchestrator | 2026-03-07 00:52:11.954721 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2026-03-07 00:52:11.954729 | orchestrator | Saturday 07 March 2026 00:50:09 +0000 (0:00:01.792) 0:00:34.533 ******** 2026-03-07 00:52:11.954737 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-03-07 00:52:11.954744 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-03-07 00:52:11.954752 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-03-07 00:52:11.954760 | orchestrator | 2026-03-07 00:52:11.954768 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-03-07 00:52:11.954776 | orchestrator | Saturday 07 March 2026 00:50:11 +0000 (0:00:01.941) 0:00:36.474 ******** 2026-03-07 
00:52:11.954784 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:52:11.954792 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:52:11.954800 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:52:11.954808 | orchestrator | 2026-03-07 00:52:11.954816 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2026-03-07 00:52:11.954823 | orchestrator | Saturday 07 March 2026 00:50:12 +0000 (0:00:00.912) 0:00:37.387 ******** 2026-03-07 00:52:11.954832 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-07 00:52:11.954847 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': 
'/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-07 00:52:11.954861 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-07 00:52:11.954876 | orchestrator | 2026-03-07 00:52:11.954884 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2026-03-07 00:52:11.954892 | orchestrator | Saturday 07 March 2026 
00:50:16 +0000 (0:00:03.603) 0:00:40.990 ******** 2026-03-07 00:52:11.954900 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:52:11.954908 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:52:11.954916 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:52:11.954923 | orchestrator | 2026-03-07 00:52:11.954931 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2026-03-07 00:52:11.954940 | orchestrator | Saturday 07 March 2026 00:50:18 +0000 (0:00:01.978) 0:00:42.969 ******** 2026-03-07 00:52:11.954947 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:52:11.954955 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:52:11.954963 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:52:11.954971 | orchestrator | 2026-03-07 00:52:11.954980 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2026-03-07 00:52:11.954988 | orchestrator | Saturday 07 March 2026 00:50:25 +0000 (0:00:07.824) 0:00:50.793 ******** 2026-03-07 00:52:11.954996 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:52:11.955004 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:52:11.955011 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:52:11.955019 | orchestrator | 2026-03-07 00:52:11.955027 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-03-07 00:52:11.955035 | orchestrator | 2026-03-07 00:52:11.955043 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-03-07 00:52:11.955051 | orchestrator | Saturday 07 March 2026 00:50:26 +0000 (0:00:00.814) 0:00:51.608 ******** 2026-03-07 00:52:11.955059 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:52:11.955067 | orchestrator | 2026-03-07 00:52:11.955075 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-03-07 00:52:11.955084 | orchestrator | Saturday 07 
March 2026 00:50:27 +0000 (0:00:00.683) 0:00:52.291 ******** 2026-03-07 00:52:11.955091 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:52:11.955099 | orchestrator | 2026-03-07 00:52:11.955107 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-03-07 00:52:11.955115 | orchestrator | Saturday 07 March 2026 00:50:27 +0000 (0:00:00.358) 0:00:52.650 ******** 2026-03-07 00:52:11.955123 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:52:11.955130 | orchestrator | 2026-03-07 00:52:11.955138 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-03-07 00:52:11.955146 | orchestrator | Saturday 07 March 2026 00:50:29 +0000 (0:00:01.843) 0:00:54.493 ******** 2026-03-07 00:52:11.955154 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:52:11.955162 | orchestrator | 2026-03-07 00:52:11.955170 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-03-07 00:52:11.955178 | orchestrator | 2026-03-07 00:52:11.955186 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-03-07 00:52:11.955194 | orchestrator | Saturday 07 March 2026 00:51:25 +0000 (0:00:55.436) 0:01:49.930 ******** 2026-03-07 00:52:11.955202 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:52:11.955209 | orchestrator | 2026-03-07 00:52:11.955223 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-03-07 00:52:11.955231 | orchestrator | Saturday 07 March 2026 00:51:26 +0000 (0:00:00.975) 0:01:50.905 ******** 2026-03-07 00:52:11.955239 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:52:11.955247 | orchestrator | 2026-03-07 00:52:11.955255 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-03-07 00:52:11.955263 | orchestrator | Saturday 07 March 2026 00:51:27 +0000 (0:00:01.629) 0:01:52.534 
******** 2026-03-07 00:52:11.955271 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:52:11.955279 | orchestrator | 2026-03-07 00:52:11.955287 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-03-07 00:52:11.955295 | orchestrator | Saturday 07 March 2026 00:51:34 +0000 (0:00:06.949) 0:01:59.484 ******** 2026-03-07 00:52:11.955303 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:52:11.955311 | orchestrator | 2026-03-07 00:52:11.955318 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-03-07 00:52:11.955326 | orchestrator | 2026-03-07 00:52:11.955334 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-03-07 00:52:11.955342 | orchestrator | Saturday 07 March 2026 00:51:45 +0000 (0:00:11.023) 0:02:10.508 ******** 2026-03-07 00:52:11.955350 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:52:11.955358 | orchestrator | 2026-03-07 00:52:11.955371 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-03-07 00:52:11.955380 | orchestrator | Saturday 07 March 2026 00:51:46 +0000 (0:00:00.964) 0:02:11.473 ******** 2026-03-07 00:52:11.955387 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:52:11.955395 | orchestrator | 2026-03-07 00:52:11.955403 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-03-07 00:52:11.955465 | orchestrator | Saturday 07 March 2026 00:51:47 +0000 (0:00:00.926) 0:02:12.400 ******** 2026-03-07 00:52:11.955477 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:52:11.955485 | orchestrator | 2026-03-07 00:52:11.955493 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-03-07 00:52:11.955501 | orchestrator | Saturday 07 March 2026 00:51:49 +0000 (0:00:02.043) 0:02:14.444 ******** 2026-03-07 00:52:11.955509 | orchestrator | 
changed: [testbed-node-2] 2026-03-07 00:52:11.955517 | orchestrator | 2026-03-07 00:52:11.955524 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2026-03-07 00:52:11.955532 | orchestrator | 2026-03-07 00:52:11.955540 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2026-03-07 00:52:11.955548 | orchestrator | Saturday 07 March 2026 00:52:07 +0000 (0:00:17.721) 0:02:32.165 ******** 2026-03-07 00:52:11.955556 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 00:52:11.955564 | orchestrator | 2026-03-07 00:52:11.955572 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2026-03-07 00:52:11.955580 | orchestrator | Saturday 07 March 2026 00:52:08 +0000 (0:00:00.769) 0:02:32.935 ******** 2026-03-07 00:52:11.955587 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:52:11.955595 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:52:11.955603 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:52:11.955622 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-03-07 00:52:11.955631 | orchestrator | enable_outward_rabbitmq_True 2026-03-07 00:52:11.955639 | orchestrator | 2026-03-07 00:52:11.955647 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2026-03-07 00:52:11.955655 | orchestrator | skipping: no hosts matched 2026-03-07 00:52:11.955664 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-03-07 00:52:11.955672 | orchestrator | outward_rabbitmq_restart 2026-03-07 00:52:11.955679 | orchestrator | 2026-03-07 00:52:11.955687 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2026-03-07 00:52:11.955696 | orchestrator | skipping: no hosts matched 2026-03-07 00:52:11.955704 | orchestrator | 2026-03-07 00:52:11.955712 | orchestrator | PLAY 
[Apply rabbitmq (outward) post-configuration] ***************************** 2026-03-07 00:52:11.955727 | orchestrator | skipping: no hosts matched 2026-03-07 00:52:11.955735 | orchestrator | 2026-03-07 00:52:11.955742 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-07 00:52:11.955750 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-03-07 00:52:11.955759 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-03-07 00:52:11.955767 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-07 00:52:11.955775 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-07 00:52:11.955783 | orchestrator | 2026-03-07 00:52:11.955791 | orchestrator | 2026-03-07 00:52:11.955799 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-07 00:52:11.955808 | orchestrator | Saturday 07 March 2026 00:52:10 +0000 (0:00:02.532) 0:02:35.467 ******** 2026-03-07 00:52:11.955815 | orchestrator | =============================================================================== 2026-03-07 00:52:11.955823 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 84.18s 2026-03-07 00:52:11.955831 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 10.84s 2026-03-07 00:52:11.955839 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 7.82s 2026-03-07 00:52:11.955847 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 4.24s 2026-03-07 00:52:11.955855 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 3.80s 2026-03-07 00:52:11.955863 | orchestrator | rabbitmq : Check rabbitmq containers 
------------------------------------ 3.60s 2026-03-07 00:52:11.955871 | orchestrator | Check RabbitMQ service -------------------------------------------------- 3.27s 2026-03-07 00:52:11.955879 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 3.14s 2026-03-07 00:52:11.955887 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode ---------------------- 2.91s 2026-03-07 00:52:11.955894 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 2.63s 2026-03-07 00:52:11.955903 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.53s 2026-03-07 00:52:11.955911 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 2.31s 2026-03-07 00:52:11.955919 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 1.98s 2026-03-07 00:52:11.955926 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.94s 2026-03-07 00:52:11.955934 | orchestrator | rabbitmq : Catch when RabbitMQ is being downgraded ---------------------- 1.91s 2026-03-07 00:52:11.955942 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 1.81s 2026-03-07 00:52:11.955950 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.79s 2026-03-07 00:52:11.955964 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.58s 2026-03-07 00:52:11.955972 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.45s 2026-03-07 00:52:11.955980 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.44s 2026-03-07 00:52:14.996785 | orchestrator | 2026-03-07 00:52:14 | INFO  | Task 8563f1e5-128f-4928-b840-a6da32a959b2 is in state STARTED 2026-03-07 00:52:14.996907 | orchestrator | 2026-03-07 00:52:14 | INFO  | Task 
694d3cb5-40e2-40ee-9873-08f7a683a628 is in state STARTED 2026-03-07 00:52:14.996923 | orchestrator | 2026-03-07 00:52:14 | INFO  | Task 58f3c3b8-7548-464c-bafb-42fd537ca7de is in state STARTED 2026-03-07 00:52:14.996935 | orchestrator | 2026-03-07 00:52:14 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:52:18.046902 | orchestrator | 2026-03-07 00:52:18 | INFO  | Task 8563f1e5-128f-4928-b840-a6da32a959b2 is in state STARTED 2026-03-07 00:52:18.049240 | orchestrator | 2026-03-07 00:52:18 | INFO  | Task 694d3cb5-40e2-40ee-9873-08f7a683a628 is in state STARTED 2026-03-07 00:52:18.051959 | orchestrator | 2026-03-07 00:52:18 | INFO  | Task 58f3c3b8-7548-464c-bafb-42fd537ca7de is in state STARTED 2026-03-07 00:52:18.052034 | orchestrator | 2026-03-07 00:52:18 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:52:21.096081 | orchestrator | 2026-03-07 00:52:21 | INFO  | Task 8563f1e5-128f-4928-b840-a6da32a959b2 is in state STARTED 2026-03-07 00:52:21.098653 | orchestrator | 2026-03-07 00:52:21 | INFO  | Task 694d3cb5-40e2-40ee-9873-08f7a683a628 is in state STARTED 2026-03-07 00:52:21.101796 | orchestrator | 2026-03-07 00:52:21 | INFO  | Task 58f3c3b8-7548-464c-bafb-42fd537ca7de is in state STARTED 2026-03-07 00:52:21.102499 | orchestrator | 2026-03-07 00:52:21 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:52:24.154392 | orchestrator | 2026-03-07 00:52:24 | INFO  | Task 8563f1e5-128f-4928-b840-a6da32a959b2 is in state STARTED 2026-03-07 00:52:24.157525 | orchestrator | 2026-03-07 00:52:24 | INFO  | Task 694d3cb5-40e2-40ee-9873-08f7a683a628 is in state STARTED 2026-03-07 00:52:24.160080 | orchestrator | 2026-03-07 00:52:24 | INFO  | Task 58f3c3b8-7548-464c-bafb-42fd537ca7de is in state STARTED 2026-03-07 00:52:24.160171 | orchestrator | 2026-03-07 00:52:24 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:52:27.207272 | orchestrator | 2026-03-07 00:52:27 | INFO  | Task 8563f1e5-128f-4928-b840-a6da32a959b2 is in state 
STARTED 2026-03-07 00:53:00.713265 | orchestrator | 2026-03-07 00:53:00 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:53:03.757716 | orchestrator | 2026-03-07 00:53:03 | INFO  | Task 8563f1e5-128f-4928-b840-a6da32a959b2 is in state STARTED 2026-03-07 00:53:03.759783 | orchestrator | 2026-03-07 00:53:03 | INFO  | Task 694d3cb5-40e2-40ee-9873-08f7a683a628 is in state STARTED 2026-03-07 00:53:03.762397 | orchestrator | 2026-03-07 00:53:03 | INFO  | Task 58f3c3b8-7548-464c-bafb-42fd537ca7de is in state STARTED 2026-03-07 00:53:03.762444 | orchestrator | 2026-03-07 00:53:03 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:53:06.809479 | orchestrator | 2026-03-07 00:53:06 | INFO  | Task 8563f1e5-128f-4928-b840-a6da32a959b2 is in state STARTED 2026-03-07 00:53:06.809956 | orchestrator | 2026-03-07 00:53:06 | INFO  | Task 694d3cb5-40e2-40ee-9873-08f7a683a628 is in state STARTED 2026-03-07 00:53:06.811572 | orchestrator | 2026-03-07 00:53:06 | INFO  | Task 58f3c3b8-7548-464c-bafb-42fd537ca7de is in state STARTED 2026-03-07 00:53:06.811640 | orchestrator | 2026-03-07 00:53:06 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:53:09.867237 | orchestrator | 2026-03-07 00:53:09 | INFO  | Task 8563f1e5-128f-4928-b840-a6da32a959b2 is in state STARTED 2026-03-07 00:53:09.867916 | orchestrator | 2026-03-07 00:53:09 | INFO  | Task 694d3cb5-40e2-40ee-9873-08f7a683a628 is in state STARTED 2026-03-07 00:53:09.869549 | orchestrator | 2026-03-07 00:53:09 | INFO  | Task 58f3c3b8-7548-464c-bafb-42fd537ca7de is in state STARTED 2026-03-07 00:53:09.869578 | orchestrator | 2026-03-07 00:53:09 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:53:12.932415 | orchestrator | 2026-03-07 00:53:12 | INFO  | Task 8563f1e5-128f-4928-b840-a6da32a959b2 is in state STARTED 2026-03-07 00:53:12.937583 | orchestrator | 2026-03-07 00:53:12 | INFO  | Task 694d3cb5-40e2-40ee-9873-08f7a683a628 is in state SUCCESS 2026-03-07 00:53:12.939404 | orchestrator | 
2026-03-07 00:53:12.939445 | orchestrator | 2026-03-07 00:53:12.939451 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-07 00:53:12.939457 | orchestrator | 2026-03-07 00:53:12.939461 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-07 00:53:12.939466 | orchestrator | Saturday 07 March 2026 00:50:30 +0000 (0:00:00.196) 0:00:00.196 ******** 2026-03-07 00:53:12.939471 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:53:12.939488 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:53:12.939493 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:53:12.939497 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:53:12.939501 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:53:12.939505 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:53:12.939509 | orchestrator | 2026-03-07 00:53:12.939513 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-07 00:53:12.939555 | orchestrator | Saturday 07 March 2026 00:50:31 +0000 (0:00:00.896) 0:00:01.093 ******** 2026-03-07 00:53:12.939562 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2026-03-07 00:53:12.939570 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2026-03-07 00:53:12.939578 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2026-03-07 00:53:12.939584 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2026-03-07 00:53:12.939591 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2026-03-07 00:53:12.939598 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2026-03-07 00:53:12.939603 | orchestrator | 2026-03-07 00:53:12.939607 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2026-03-07 00:53:12.939630 | orchestrator | 2026-03-07 00:53:12.939636 | orchestrator | TASK [ovn-controller : include_tasks] 
******************************************
2026-03-07 00:53:12.939642 | orchestrator | Saturday 07 March 2026 00:50:32 +0000 (0:00:01.245) 0:00:02.338 ********
2026-03-07 00:53:12.939650 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-07 00:53:12.939657 | orchestrator |
2026-03-07 00:53:12.939662 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] **********************
2026-03-07 00:53:12.939668 | orchestrator | Saturday 07 March 2026 00:50:34 +0000 (0:00:02.302) 0:00:04.641 ********
2026-03-07 00:53:12.939676 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:53:12.939685 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:53:12.939692 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:53:12.939698 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:53:12.939704 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:53:12.939710 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:53:12.939716 | orchestrator |
2026-03-07 00:53:12.939734 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************
2026-03-07 00:53:12.939741 | orchestrator | Saturday 07 March 2026 00:50:36 +0000 (0:00:02.229) 0:00:06.870 ********
2026-03-07 00:53:12.939753 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:53:12.939767 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:53:12.939774 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:53:12.939780 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:53:12.939786 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:53:12.939793 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:53:12.939799 | orchestrator |
2026-03-07 00:53:12.939805 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] *************
2026-03-07 00:53:12.939812 | orchestrator | Saturday 07 March 2026 00:50:38 +0000 (0:00:01.964) 0:00:08.835 ********
2026-03-07 00:53:12.939857 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:53:12.939862 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:53:12.939872 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:53:12.939881 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:53:12.939889 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:53:12.939893 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:53:12.939897 | orchestrator |
2026-03-07 00:53:12.939901 | orchestrator | TASK [ovn-controller : Copying over systemd override] **************************
2026-03-07 00:53:12.939905 | orchestrator | Saturday 07 March 2026 00:50:40 +0000 (0:00:01.451) 0:00:10.286 ********
2026-03-07 00:53:12.939909 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:53:12.939913 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:53:12.939916 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:53:12.939920 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:53:12.939925 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:53:12.939929 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:53:12.939937 | orchestrator |
2026-03-07 00:53:12.939944 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************
2026-03-07 00:53:12.939948 | orchestrator | Saturday 07 March 2026 00:50:42 +0000 (0:00:02.170) 0:00:12.457 ********
2026-03-07 00:53:12.939956 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:53:12.939961 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:53:12.939965 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:53:12.939970 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:53:12.939974 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:53:12.939979 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:53:12.939983 | orchestrator |
2026-03-07 00:53:12.939988 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ********************
2026-03-07 00:53:12.939992 | orchestrator | Saturday 07 March 2026 00:50:44 +0000 (0:00:03.088) 0:00:14.581 ********
2026-03-07 00:53:12.939997 | orchestrator | changed: [testbed-node-3]
2026-03-07 00:53:12.940001 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:53:12.940005 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:53:12.940010 | orchestrator | changed: [testbed-node-5]
2026-03-07 00:53:12.940014 | orchestrator | changed: [testbed-node-4]
2026-03-07 00:53:12.940071 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:53:12.940076 | orchestrator |
2026-03-07 00:53:12.940080 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] *********************************
2026-03-07 00:53:12.940085 | orchestrator | Saturday 07 March 2026 00:50:47 +0000 (0:00:03.088) 0:00:17.670 ********
2026-03-07 00:53:12.940090 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'})
2026-03-07 00:53:12.940099 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'})
2026-03-07 00:53:12.940103 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'})
2026-03-07 00:53:12.940108 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-03-07 00:53:12.940112 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'})
2026-03-07 00:53:12.940116 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-03-07 00:53:12.940121 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-03-07 00:53:12.940127 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'})
2026-03-07 00:53:12.940171 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-03-07 00:53:12.940191 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'})
2026-03-07 00:53:12.940198 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-03-07 00:53:12.940208 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-03-07 00:53:12.940264 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-03-07 00:53:12.940272 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-03-07 00:53:12.940280 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-03-07 00:53:12.940286 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-03-07 00:53:12.940293 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-03-07 00:53:12.940299 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-03-07 00:53:12.940305 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-03-07 00:53:12.940311 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-03-07 00:53:12.940317 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-03-07 00:53:12.940323 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-03-07 00:53:12.940329 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-03-07 00:53:12.940336 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-03-07 00:53:12.940342 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-03-07 00:53:12.940348 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-03-07 00:53:12.940355 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-03-07 00:53:12.940362 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-03-07 00:53:12.940369 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-03-07 00:53:12.940375 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-03-07 00:53:12.940387 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2026-03-07 00:53:12.940394 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-03-07 00:53:12.940400 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-03-07 00:53:12.940406 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-03-07 00:53:12.940412 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2026-03-07 00:53:12.940419 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-03-07 00:53:12.940426 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'})
2026-03-07 00:53:12.940432 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-03-07 00:53:12.940438 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2026-03-07 00:53:12.940444 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-03-07 00:53:12.940450 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2026-03-07 00:53:12.940456 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'})
2026-03-07 00:53:12.940462 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2026-03-07 00:53:12.940467 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2026-03-07 00:53:12.940479 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'})
2026-03-07 00:53:12.940486 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2026-03-07 00:53:12.940492 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2026-03-07 00:53:12.940503 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'})
2026-03-07 00:53:12.940510 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'})
2026-03-07 00:53:12.940536 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'})
2026-03-07 00:53:12.940542 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2026-03-07 00:53:12.940548 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2026-03-07 00:53:12.940554 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2026-03-07 00:53:12.940560 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2026-03-07 00:53:12.940565 | orchestrator |
2026-03-07 00:53:12.940572 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-03-07 00:53:12.940578 | orchestrator | Saturday 07 March 2026 00:51:12 +0000 (0:00:25.053) 0:00:42.724 ********
2026-03-07 00:53:12.940584 | orchestrator |
2026-03-07 00:53:12.940591 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-03-07 00:53:12.940596 | orchestrator | Saturday 07 March 2026 00:51:12 +0000 (0:00:00.102) 0:00:42.910 ********
2026-03-07 00:53:12.940607 | orchestrator |
2026-03-07 00:53:12.940614 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-03-07 00:53:12.940619 | orchestrator | Saturday 07 March 2026 00:51:13 +0000 (0:00:00.101) 0:00:43.012 ********
2026-03-07 00:53:12.940625 | orchestrator |
2026-03-07 00:53:12.940631 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-03-07 00:53:12.940637 | orchestrator | Saturday 07 March 2026 00:51:13 +0000 (0:00:00.128) 0:00:43.141 ********
2026-03-07 00:53:12.940643 | orchestrator |
2026-03-07 00:53:12.940649 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-03-07 00:53:12.940655 | orchestrator | Saturday 07 March 2026 00:51:13 +0000 (0:00:00.091) 0:00:43.232 ********
2026-03-07 00:53:12.940661 | orchestrator |
2026-03-07 00:53:12.940666 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-03-07 00:53:12.940672 | orchestrator | Saturday 07 March 2026 00:51:13 +0000 (0:00:00.089) 0:00:43.322 ********
2026-03-07 00:53:12.940678 | orchestrator |
2026-03-07 00:53:12.940685 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] ***********************
2026-03-07 00:53:12.940691 | orchestrator | Saturday 07 March 2026 00:51:13 +0000 (0:00:00.089) 0:00:43.322 ********
2026-03-07 00:53:12.940697 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:53:12.940704 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:53:12.940710 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:53:12.940717 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:53:12.940724 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:53:12.940731 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:53:12.940738 | orchestrator |
2026-03-07 00:53:12.940743 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************
2026-03-07 00:53:12.940750 | orchestrator | Saturday 07 March 2026 00:51:16 +0000 (0:00:02.913) 0:00:46.235 ********
2026-03-07 00:53:12.940757 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:53:12.940764 | orchestrator | changed: [testbed-node-3]
2026-03-07 00:53:12.940769 | orchestrator | changed: [testbed-node-4]
2026-03-07 00:53:12.940775 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:53:12.940780 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:53:12.940786 | orchestrator | changed: [testbed-node-5]
2026-03-07 00:53:12.940792 | orchestrator |
2026-03-07 00:53:12.940798 | orchestrator | PLAY [Apply role ovn-db] *******************************************************
2026-03-07 00:53:12.940805 | orchestrator |
2026-03-07 00:53:12.940812 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2026-03-07 00:53:12.940818 | orchestrator | Saturday 07 March 2026 00:51:47 +0000 (0:00:30.905) 0:01:17.141 ********
2026-03-07 00:53:12.940825 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-07 00:53:12.940832 | orchestrator |
2026-03-07 00:53:12.940838 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2026-03-07 00:53:12.940845 | orchestrator | Saturday 07 March 2026 00:51:48 +0000 (0:00:01.541) 0:01:18.683 ********
2026-03-07 00:53:12.940851 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-07 00:53:12.940857 | orchestrator |
2026-03-07 00:53:12.940864 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] *************
2026-03-07 00:53:12.940870 | orchestrator | Saturday 07 March 2026 00:51:49 +0000 (0:00:00.755) 0:01:19.438 ********
2026-03-07 00:53:12.940876 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:53:12.940882 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:53:12.940888 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:53:12.940894 | orchestrator |
2026-03-07 00:53:12.940901 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] ***************
2026-03-07 00:53:12.940908 | orchestrator | Saturday 07 March 2026 00:51:50 +0000 (0:00:01.190) 0:01:20.629 ********
2026-03-07 00:53:12.940914 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:53:12.940921 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:53:12.940928 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:53:12.940941 | orchestrator |
2026-03-07 00:53:12.940953 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] ***************
2026-03-07 00:53:12.940959 | orchestrator | Saturday 07 March 2026 00:51:51 +0000 (0:00:00.384) 0:01:21.014 ********
2026-03-07 00:53:12.940966 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:53:12.940972 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:53:12.940979 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:53:12.940986 | orchestrator |
2026-03-07 00:53:12.941003 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] *******
2026-03-07 00:53:12.941009 | orchestrator | Saturday 07 March 2026 00:51:51 +0000 (0:00:00.377) 0:01:21.391 ********
2026-03-07 00:53:12.941015 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:53:12.941021 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:53:12.941028 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:53:12.941034 | orchestrator |
2026-03-07 00:53:12.941040 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] *******
2026-03-07 00:53:12.941085 | orchestrator | Saturday 07 March 2026 00:51:51 +0000 (0:00:00.440) 0:01:21.832 ********
2026-03-07 00:53:12.941093 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:53:12.941100 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:53:12.941107 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:53:12.941114 | orchestrator |
2026-03-07 00:53:12.941120 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************
2026-03-07 00:53:12.941127 | orchestrator | Saturday 07 March 2026 00:51:52 +0000 (0:00:00.836) 0:01:22.669 ********
2026-03-07 00:53:12.941134 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:53:12.941141 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:53:12.941147 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:53:12.941154 | orchestrator |
2026-03-07 00:53:12.941161 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] *****************************
2026-03-07 00:53:12.941168 | orchestrator | Saturday 07 March 2026 00:51:53 +0000 (0:00:00.334) 0:01:23.004 ********
2026-03-07 00:53:12.941175 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:53:12.941181 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:53:12.941188 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:53:12.941193 | orchestrator |
2026-03-07 00:53:12.941199 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] *************
2026-03-07 00:53:12.941205 | orchestrator | Saturday 07 March 2026 00:51:53 +0000 (0:00:00.320) 0:01:23.325 ********
2026-03-07 00:53:12.941211 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:53:12.941217 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:53:12.941223 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:53:12.941228 | orchestrator |
2026-03-07 00:53:12.941235 | orchestrator | TASK [ovn-db : Get OVN NB database information] ********************************
2026-03-07 00:53:12.941241 | orchestrator | Saturday 07 March 2026 00:51:53 +0000 (0:00:00.329) 0:01:23.655 ********
2026-03-07 00:53:12.941246 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:53:12.941298 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:53:12.941304 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:53:12.941310 | orchestrator |
2026-03-07 00:53:12.941316 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] **************
2026-03-07 00:53:12.941323 | orchestrator | Saturday 07 March 2026 00:51:54 +0000 (0:00:00.558) 0:01:24.213 ********
2026-03-07 00:53:12.941330 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:53:12.941337 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:53:12.941343 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:53:12.941350 | orchestrator |
2026-03-07 00:53:12.941356 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] *****************
2026-03-07 00:53:12.941363 | orchestrator | Saturday 07 March 2026 00:51:54 +0000 (0:00:00.375) 0:01:24.589 ********
2026-03-07 00:53:12.941369 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:53:12.941375 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:53:12.941381 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:53:12.941386 | orchestrator |
2026-03-07 00:53:12.941392 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************
2026-03-07 00:53:12.941407 | orchestrator | Saturday 07 March 2026 00:51:55 +0000 (0:00:00.368) 0:01:24.957 ********
2026-03-07 00:53:12.941414 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:53:12.941421 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:53:12.941428 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:53:12.941435 | orchestrator |
2026-03-07 00:53:12.941441 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] *****************************
2026-03-07 00:53:12.941448 | orchestrator | Saturday 07 March 2026 00:51:55 +0000 (0:00:00.331) 0:01:25.288 ********
2026-03-07 00:53:12.941455 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:53:12.941462 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:53:12.941469 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:53:12.941475 | orchestrator |
2026-03-07 00:53:12.941481 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] *************
2026-03-07 00:53:12.941488 | orchestrator | Saturday 07 March 2026 00:51:55 +0000 (0:00:00.581) 0:01:25.870 ********
2026-03-07 00:53:12.941494 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:53:12.941500 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:53:12.941505 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:53:12.941510 | orchestrator |
2026-03-07 00:53:12.941534 | orchestrator | TASK [ovn-db : Get OVN SB database information] ********************************
2026-03-07 00:53:12.941541 | orchestrator | Saturday 07 March 2026 00:51:56 +0000 (0:00:00.387) 0:01:26.258 ********
2026-03-07 00:53:12.941547 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:53:12.941554 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:53:12.941560 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:53:12.941566 | orchestrator |
2026-03-07 00:53:12.941573 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] **************
2026-03-07 00:53:12.941580 | orchestrator | Saturday 07 March 2026 00:51:56 +0000 (0:00:00.371) 0:01:26.630 ********
2026-03-07 00:53:12.941587 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:53:12.941593 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:53:12.941600 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:53:12.941606 | orchestrator |
2026-03-07 00:53:12.941612 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] *****************
2026-03-07 00:53:12.941618 | orchestrator | Saturday 07 March 2026 00:51:57 +0000 (0:00:00.398) 0:01:27.028 ********
2026-03-07 00:53:12.941625 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:53:12.941631 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:53:12.941646 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:53:12.941653 | orchestrator |
2026-03-07 00:53:12.941659 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2026-03-07 00:53:12.941666 | orchestrator | Saturday 07 March 2026 00:51:57 +0000 (0:00:00.549) 0:01:27.577 ********
2026-03-07 00:53:12.941680 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-07 00:53:12.941687 | orchestrator |
2026-03-07 00:53:12.941694 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] *******************
2026-03-07 00:53:12.941700 | orchestrator | Saturday 07 March 2026 00:51:58 +0000 (0:00:01.164) 0:01:28.742 ********
2026-03-07 00:53:12.941706 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:53:12.941713 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:53:12.941720 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:53:12.941726 | orchestrator |
2026-03-07 00:53:12.941732 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] *******************
2026-03-07 00:53:12.941738 | orchestrator | Saturday 07 March 2026 00:51:59 +0000 (0:00:00.528) 0:01:29.271 ********
2026-03-07 00:53:12.941745 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:53:12.941751 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:53:12.941757 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:53:12.941763 | orchestrator |
2026-03-07 00:53:12.941770 | orchestrator | TASK [ovn-db : Check NB cluster status] ****************************************
2026-03-07 00:53:12.941777 | orchestrator | Saturday 07 March 2026 00:51:59 +0000 (0:00:00.542) 0:01:29.813 ********
2026-03-07 00:53:12.941789 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:53:12.941796 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:53:12.941802 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:53:12.941808 | orchestrator |
2026-03-07 00:53:12.941815 | orchestrator | TASK [ovn-db : Check SB cluster status] ****************************************
2026-03-07 00:53:12.941821 | orchestrator | Saturday 07 March 2026 00:52:00 +0000 (0:00:00.606) 0:01:30.420 ********
2026-03-07 00:53:12.941828 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:53:12.941834 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:53:12.941840 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:53:12.941846 | orchestrator |
2026-03-07 00:53:12.941853 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] ***
2026-03-07 00:53:12.941859 | orchestrator | Saturday 07 March 2026 00:52:00 +0000 (0:00:00.377) 0:01:30.797 ********
2026-03-07 00:53:12.941865 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:53:12.941871 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:53:12.941877 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:53:12.941883 | orchestrator |
2026-03-07 00:53:12.941889 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] ***
2026-03-07 00:53:12.941896 | orchestrator | Saturday 07 March 2026 00:52:01 +0000 (0:00:00.385) 0:01:31.183 ********
2026-03-07 00:53:12.941902 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:53:12.941909 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:53:12.941919 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:53:12.941929 | orchestrator |
2026-03-07 00:53:12.941935 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ********************
2026-03-07 00:53:12.941942 | orchestrator | Saturday 07 March 2026 00:52:01 +0000 (0:00:00.362) 0:01:31.546 ********
2026-03-07 00:53:12.941948 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:53:12.941955 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:53:12.941965 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:53:12.941972 | orchestrator |
2026-03-07 00:53:12.941979 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ********************
2026-03-07 00:53:12.942301 | orchestrator | Saturday 07 March 2026 00:52:02 +0000 (0:00:00.679) 0:01:32.226 ********
2026-03-07 00:53:12.942319 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:53:12.942326 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:53:12.942333 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:53:12.942340 | orchestrator |
2026-03-07 00:53:12.942346 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2026-03-07 00:53:12.942353 | orchestrator | Saturday 07 March 2026 00:52:02 +0000 (0:00:00.406) 0:01:32.632 ********
2026-03-07 00:53:12.942362 | orchestrator |
changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 00:53:12.942371 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 00:53:12.942378 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 00:53:12.942392 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 00:53:12.942416 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 00:53:12.942423 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 00:53:12.942430 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 00:53:12.942437 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 00:53:12.942444 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 00:53:12.942450 | orchestrator | 2026-03-07 00:53:12.942457 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-03-07 
00:53:12.942464 | orchestrator | Saturday 07 March 2026 00:52:04 +0000 (0:00:01.650) 0:01:34.283 ******** 2026-03-07 00:53:12.942471 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 00:53:12.942478 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 00:53:12.942484 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 00:53:12.942491 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 00:53:12.942506 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': 
['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 00:53:12.942561 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 00:53:12.942569 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 00:53:12.942576 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 00:53:12.942583 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 00:53:12.942590 | orchestrator | 
2026-03-07 00:53:12.942596 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2026-03-07 00:53:12.942603 | orchestrator | Saturday 07 March 2026 00:52:08 +0000 (0:00:04.445) 0:01:38.728 ******** 2026-03-07 00:53:12.942610 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 00:53:12.942617 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 00:53:12.942622 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 00:53:12.942628 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 00:53:12.942639 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 
'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 00:53:12.942648 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 00:53:12.942657 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 00:53:12.942663 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 00:53:12.942669 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 00:53:12.942675 | orchestrator | 2026-03-07 00:53:12.942681 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-07 00:53:12.942686 | orchestrator | Saturday 07 March 2026 00:52:11 +0000 (0:00:02.654) 0:01:41.383 ******** 2026-03-07 00:53:12.942692 | orchestrator | 2026-03-07 00:53:12.942698 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-07 00:53:12.942705 | orchestrator | Saturday 07 March 2026 00:52:11 +0000 (0:00:00.070) 0:01:41.453 ******** 2026-03-07 00:53:12.942710 | orchestrator | 2026-03-07 00:53:12.942716 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-07 00:53:12.942721 | orchestrator | Saturday 07 March 2026 00:52:11 +0000 (0:00:00.069) 0:01:41.523 ******** 2026-03-07 00:53:12.942727 | orchestrator | 2026-03-07 00:53:12.942733 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-03-07 00:53:12.942740 | orchestrator | Saturday 07 March 2026 00:52:11 +0000 (0:00:00.072) 0:01:41.595 ******** 2026-03-07 00:53:12.942746 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:53:12.942752 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:53:12.942758 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:53:12.942764 | orchestrator | 2026-03-07 00:53:12.942770 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-03-07 00:53:12.942776 | orchestrator | Saturday 07 March 2026 00:52:19 +0000 (0:00:07.556) 0:01:49.152 ******** 2026-03-07 00:53:12.942782 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:53:12.942788 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:53:12.942794 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:53:12.942800 | orchestrator | 2026-03-07 
00:53:12.942807 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-03-07 00:53:12.942818 | orchestrator | Saturday 07 March 2026 00:52:26 +0000 (0:00:07.659) 0:01:56.811 ******** 2026-03-07 00:53:12.942824 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:53:12.942830 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:53:12.942836 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:53:12.942842 | orchestrator | 2026-03-07 00:53:12.942848 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-03-07 00:53:12.942854 | orchestrator | Saturday 07 March 2026 00:52:31 +0000 (0:00:04.471) 0:02:01.282 ******** 2026-03-07 00:53:12.942861 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:53:12.942867 | orchestrator | 2026-03-07 00:53:12.942873 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-03-07 00:53:12.942879 | orchestrator | Saturday 07 March 2026 00:52:31 +0000 (0:00:00.132) 0:02:01.415 ******** 2026-03-07 00:53:12.942886 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:53:12.942893 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:53:12.942899 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:53:12.942904 | orchestrator | 2026-03-07 00:53:12.942911 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-03-07 00:53:12.942917 | orchestrator | Saturday 07 March 2026 00:52:32 +0000 (0:00:01.138) 0:02:02.553 ******** 2026-03-07 00:53:12.942924 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:53:12.942930 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:53:12.942937 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:53:12.942944 | orchestrator | 2026-03-07 00:53:12.942949 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-03-07 00:53:12.942956 | orchestrator | Saturday 
07 March 2026 00:52:33 +0000 (0:00:00.677) 0:02:03.231 ******** 2026-03-07 00:53:12.942962 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:53:12.942969 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:53:12.942975 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:53:12.942981 | orchestrator | 2026-03-07 00:53:12.942987 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-03-07 00:53:12.942992 | orchestrator | Saturday 07 March 2026 00:52:34 +0000 (0:00:00.890) 0:02:04.121 ******** 2026-03-07 00:53:12.942999 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:53:12.943005 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:53:12.943012 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:53:12.943018 | orchestrator | 2026-03-07 00:53:12.943023 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-03-07 00:53:12.943030 | orchestrator | Saturday 07 March 2026 00:52:35 +0000 (0:00:00.890) 0:02:05.012 ******** 2026-03-07 00:53:12.943039 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:53:12.943045 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:53:12.943055 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:53:12.943061 | orchestrator | 2026-03-07 00:53:12.943067 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-03-07 00:53:12.943072 | orchestrator | Saturday 07 March 2026 00:52:35 +0000 (0:00:00.890) 0:02:05.902 ******** 2026-03-07 00:53:12.943078 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:53:12.943084 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:53:12.943090 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:53:12.943097 | orchestrator | 2026-03-07 00:53:12.943108 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2026-03-07 00:53:12.943113 | orchestrator | Saturday 07 March 2026 00:52:36 +0000 (0:00:00.823) 0:02:06.725 
******** 2026-03-07 00:53:12.943119 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:53:12.943125 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:53:12.943131 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:53:12.943138 | orchestrator | 2026-03-07 00:53:12.943144 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-03-07 00:53:12.943150 | orchestrator | Saturday 07 March 2026 00:52:37 +0000 (0:00:00.437) 0:02:07.163 ******** 2026-03-07 00:53:12.943156 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 00:53:12.943166 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 00:53:12.943170 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 00:53:12.943174 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 00:53:12.943179 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 00:53:12.943183 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 00:53:12.943188 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 00:53:12.943194 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 00:53:12.943204 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 
'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 00:53:12.943210 | orchestrator | 2026-03-07 00:53:12.943216 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-03-07 00:53:12.943222 | orchestrator | Saturday 07 March 2026 00:52:38 +0000 (0:00:01.495) 0:02:08.659 ******** 2026-03-07 00:53:12.943232 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 00:53:12.943243 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 00:53:12.943250 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 00:53:12.943257 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 
'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 00:53:12.943264 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 00:53:12.943270 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 00:53:12.943277 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 00:53:12.943283 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}})
2026-03-07 00:53:12.943291 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:53:12.943295 | orchestrator |
2026-03-07 00:53:12.943299 | orchestrator | TASK [ovn-db : Check ovn containers] *******************************************
2026-03-07 00:53:12.943303 | orchestrator | Saturday 07 March 2026 00:52:43 +0000 (0:00:04.379) 0:02:13.039 ********
2026-03-07 00:53:12.943311 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:53:12.943323 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:53:12.943327 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:53:12.943331 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:53:12.943335 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:53:12.943339 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:53:12.943343 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:53:12.943347 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:53:12.943350 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:53:12.943354 | orchestrator |
2026-03-07 00:53:12.943358 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-03-07 00:53:12.943362 | orchestrator | Saturday 07 March 2026 00:52:46 +0000 (0:00:02.893) 0:02:15.932 ********
2026-03-07 00:53:12.943365 | orchestrator |
2026-03-07 00:53:12.943369 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-03-07 00:53:12.943373 | orchestrator | Saturday 07 March 2026 00:52:46 +0000 (0:00:00.069) 0:02:16.001 ********
2026-03-07 00:53:12.943376 | orchestrator |
2026-03-07 00:53:12.943380 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-03-07 00:53:12.943384 | orchestrator | Saturday 07 March 2026 00:52:46 +0000 (0:00:00.069) 0:02:16.071 ********
2026-03-07 00:53:12.943391 | orchestrator |
2026-03-07 00:53:12.943394 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2026-03-07 00:53:12.943398 | orchestrator | Saturday 07 March 2026 00:52:46 +0000 (0:00:00.077) 0:02:16.148 ********
2026-03-07 00:53:12.943402 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:53:12.943406 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:53:12.943409 | orchestrator |
2026-03-07 00:53:12.943415 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2026-03-07 00:53:12.943419 | orchestrator | Saturday 07 March 2026 00:52:52 +0000 (0:00:06.470) 0:02:22.619 ********
2026-03-07 00:53:12.943423 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:53:12.943426 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:53:12.943430 | orchestrator |
2026-03-07 00:53:12.943437 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2026-03-07 00:53:12.943441 | orchestrator | Saturday 07 March 2026 00:52:59 +0000 (0:00:06.513) 0:02:29.133 ********
2026-03-07 00:53:12.943445 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:53:12.943448 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:53:12.943452 | orchestrator |
2026-03-07 00:53:12.943456 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2026-03-07 00:53:12.943459 | orchestrator | Saturday 07 March 2026 00:53:05 +0000 (0:00:06.549) 0:02:35.682 ********
2026-03-07 00:53:12.943463 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:53:12.943467 | orchestrator |
2026-03-07 00:53:12.943470 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2026-03-07 00:53:12.943474 | orchestrator | Saturday 07 March 2026 00:53:05 +0000 (0:00:00.166) 0:02:35.849 ********
2026-03-07 00:53:12.943478 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:53:12.943481 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:53:12.943485 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:53:12.943489 | orchestrator |
2026-03-07 00:53:12.943492 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2026-03-07 00:53:12.943496 | orchestrator | Saturday 07 March 2026 00:53:06 +0000 (0:00:00.783) 0:02:36.632 ********
2026-03-07 00:53:12.943500 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:53:12.943504 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:53:12.943507 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:53:12.943511 | orchestrator |
2026-03-07 00:53:12.943515 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2026-03-07 00:53:12.943555 | orchestrator | Saturday 07 March 2026 00:53:07 +0000 (0:00:00.665) 0:02:37.297 ********
2026-03-07 00:53:12.943559 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:53:12.943562 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:53:12.943566 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:53:12.943570 | orchestrator |
2026-03-07 00:53:12.943573 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2026-03-07 00:53:12.943577 | orchestrator | Saturday 07 March 2026 00:53:08 +0000 (0:00:00.850) 0:02:38.148 ********
2026-03-07 00:53:12.943581 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:53:12.943585 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:53:12.943588 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:53:12.943592 | orchestrator |
2026-03-07 00:53:12.943596 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2026-03-07 00:53:12.943599 | orchestrator | Saturday 07 March 2026 00:53:08 +0000 (0:00:00.708) 0:02:38.857 ********
2026-03-07 00:53:12.943603 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:53:12.943607 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:53:12.943611 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:53:12.943614 | orchestrator |
2026-03-07 00:53:12.943618 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2026-03-07 00:53:12.943622 | orchestrator | Saturday 07 March 2026 00:53:09 +0000 (0:00:01.053) 0:02:39.911 ********
2026-03-07 00:53:12.943625 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:53:12.943629 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:53:12.943636 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:53:12.943640 | orchestrator |
2026-03-07 00:53:12.943644 | orchestrator | PLAY RECAP *********************************************************************
2026-03-07 00:53:12.943648 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2026-03-07 00:53:12.943652 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2026-03-07 00:53:12.943656 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2026-03-07 00:53:12.943660 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-07 00:53:12.943664 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-07 00:53:12.943668 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-07 00:53:12.943672 | orchestrator |
2026-03-07 00:53:12.943676 | orchestrator |
2026-03-07 00:53:12.943680 | orchestrator | TASKS RECAP ********************************************************************
2026-03-07 00:53:12.943683 | orchestrator | Saturday 07 March 2026 00:53:10 +0000 (0:00:00.985) 0:02:40.896 ********
2026-03-07 00:53:12.943687 | orchestrator | ===============================================================================
2026-03-07 00:53:12.943691 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 30.91s
2026-03-07 00:53:12.943695 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 25.05s
2026-03-07 00:53:12.943698 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 14.17s
2026-03-07 00:53:12.943702 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 14.03s
2026-03-07 00:53:12.943706 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 11.02s
2026-03-07 00:53:12.943710 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.45s
2026-03-07 00:53:12.943713 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.38s
2026-03-07 00:53:12.943719 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 3.09s
2026-03-07 00:53:12.943723 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 2.91s
2026-03-07 00:53:12.943727 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.89s
2026-03-07 00:53:12.943730 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.65s
2026-03-07 00:53:12.943756 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 2.30s
2026-03-07 00:53:12.943760 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 2.23s
2026-03-07 00:53:12.943764 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 2.17s
2026-03-07 00:53:12.943768 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 2.13s
2026-03-07 00:53:12.943772 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.96s
2026-03-07 00:53:12.943776 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.65s
2026-03-07 00:53:12.943779 | orchestrator | ovn-db : include_tasks -------------------------------------------------- 1.54s
2026-03-07 00:53:12.943783 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.50s
2026-03-07 00:53:12.943787 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.45s
2026-03-07 00:53:12.943790 | orchestrator | 2026-03-07 00:53:12 | INFO  | Task 58f3c3b8-7548-464c-bafb-42fd537ca7de is in state STARTED
2026-03-07 00:53:12.943794 | orchestrator | 2026-03-07 00:53:12 | INFO  | Wait 1 second(s) until the next check
[... identical polling entries ("Task 8563f1e5-128f-4928-b840-a6da32a959b2 is in state STARTED", "Task 58f3c3b8-7548-464c-bafb-42fd537ca7de is in state STARTED", "Wait 1 second(s) until the next check") repeated every ~3 seconds from 00:53:15 through 00:56:40 ...]
2026-03-07 00:56:43.351118 | orchestrator | 2026-03-07 00:56:43 | INFO  | Task 8563f1e5-128f-4928-b840-a6da32a959b2 is in state SUCCESS
2026-03-07 00:56:43.352301 | orchestrator |
2026-03-07 00:56:43.352354 | orchestrator |
2026-03-07 00:56:43.352365 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-07 00:56:43.352373 | orchestrator |
2026-03-07 00:56:43.352380 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-07 00:56:43.352387 | orchestrator | Saturday 07 March 2026 00:49:01 +0000 (0:00:00.211) 0:00:00.211 ********
2026-03-07 00:56:43.352394 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:56:43.352402 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:56:43.352409 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:56:43.352416 | orchestrator |
2026-03-07 00:56:43.352423 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-07 00:56:43.352430 | orchestrator | Saturday 07 March 2026 00:49:02 +0000 (0:00:00.289) 0:00:00.501 ********
2026-03-07 00:56:43.352437 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2026-03-07 00:56:43.352444 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2026-03-07 00:56:43.352461 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2026-03-07 00:56:43.352468 | orchestrator |
2026-03-07 00:56:43.352474 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2026-03-07 00:56:43.352481 | orchestrator |
2026-03-07 00:56:43.352488 | orchestrator
| TASK [loadbalancer : include_tasks] ********************************************
2026-03-07 00:56:43.352494 | orchestrator | Saturday 07 March 2026 00:49:02 +0000 (0:00:00.406) 0:00:00.907 ********
2026-03-07 00:56:43.352501 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-07 00:56:43.352508 | orchestrator |
2026-03-07 00:56:43.352514 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2026-03-07 00:56:43.352521 | orchestrator | Saturday 07 March 2026 00:49:03 +0000 (0:00:00.569) 0:00:01.477 ********
2026-03-07 00:56:43.352527 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:56:43.352534 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:56:43.352540 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:56:43.352547 | orchestrator |
2026-03-07 00:56:43.352554 | orchestrator | TASK [Setting sysctl values] ***************************************************
2026-03-07 00:56:43.352561 | orchestrator | Saturday 07 March 2026 00:49:03 +0000 (0:00:00.658) 0:00:02.136 ********
2026-03-07 00:56:43.352568 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-07 00:56:43.352607 | orchestrator |
2026-03-07 00:56:43.352612 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2026-03-07 00:56:43.352616 | orchestrator | Saturday 07 March 2026 00:49:04 +0000 (0:00:00.862) 0:00:02.998 ********
2026-03-07 00:56:43.352620 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:56:43.352624 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:56:43.352627 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:56:43.352631 | orchestrator |
2026-03-07 00:56:43.352635 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2026-03-07 00:56:43.352643 | orchestrator | Saturday 07 March 2026 00:49:05 +0000 (0:00:01.236) 0:00:04.235 ********
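The `sysctl : Setting sysctl values` results below show two outcomes per item: a value is applied (`changed`) unless it carries the `KOLLA_UNSET` sentinel, in which case the kernel default is left in place and the item reports `ok`. A minimal sketch of that filtering, assuming the item shape shown in the log; `render_sysctl` is a hypothetical helper, not kolla-ansible code, and the sentinel semantics are inferred from the ok-vs-changed states:

```python
# Sketch: filter out KOLLA_UNSET entries when rendering sysctl settings.
# The item dicts mirror the log output; render_sysctl is illustrative only.
KOLLA_UNSET = 'KOLLA_UNSET'

def render_sysctl(items):
    # Entries with the sentinel are skipped (kernel default stays in place).
    return [f"{i['name']}={i['value']}" for i in items if i['value'] != KOLLA_UNSET]

items = [
    {'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1},
    {'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1},
    {'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'},
    {'name': 'net.unix.max_dgram_qlen', 'value': 128},
]
print(render_sysctl(items))
```

The three non-sentinel items above are exactly the ones the log reports as `changed` on each node.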
2026-03-07 00:56:43.352647 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-03-07 00:56:43.352651 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-03-07 00:56:43.352655 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-03-07 00:56:43.352658 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-03-07 00:56:43.352662 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-03-07 00:56:43.352678 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-03-07 00:56:43.352682 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-03-07 00:56:43.352718 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-03-07 00:56:43.352723 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-03-07 00:56:43.352727 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-03-07 00:56:43.352730 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-03-07 00:56:43.352734 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-03-07 00:56:43.352738 | orchestrator |
2026-03-07 00:56:43.352742 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-03-07 00:56:43.352794 | orchestrator | Saturday 07 March 2026 00:49:09 +0000 (0:00:03.846) 0:00:08.081 ********
2026-03-07 00:56:43.352800 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-03-07 00:56:43.352805 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-03-07 00:56:43.352808 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-03-07 00:56:43.352812 | orchestrator |
2026-03-07 00:56:43.352816 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-03-07 00:56:43.352820 | orchestrator | Saturday 07 March 2026 00:49:11 +0000 (0:00:01.365) 0:00:09.447 ********
2026-03-07 00:56:43.352824 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-03-07 00:56:43.352827 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-03-07 00:56:43.352831 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-03-07 00:56:43.352835 | orchestrator |
2026-03-07 00:56:43.352838 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-03-07 00:56:43.352842 | orchestrator | Saturday 07 March 2026 00:49:13 +0000 (0:00:02.260) 0:00:11.708 ********
2026-03-07 00:56:43.352846 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)
2026-03-07 00:56:43.352850 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:56:43.352862 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)
2026-03-07 00:56:43.352866 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:56:43.352870 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)
2026-03-07 00:56:43.352874 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:56:43.352878 | orchestrator |
2026-03-07 00:56:43.352881 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************
2026-03-07 00:56:43.352892 | orchestrator | Saturday 07 March 2026 00:49:14 +0000 (0:00:01.510) 0:00:13.218 ********
2026-03-07 00:56:43.352905 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes':
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-07 00:56:43.352914 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-07 00:56:43.352922 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-07 00:56:43.352927 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-07 00:56:43.352932 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-07 00:56:43.352939 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-07 00:56:43.352944 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 
'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-07 00:56:43.352951 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-07 00:56:43.352955 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-07 00:56:43.352969 | orchestrator | 2026-03-07 00:56:43.352973 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2026-03-07 00:56:43.352978 | orchestrator | Saturday 07 March 2026 00:49:17 +0000 (0:00:02.534) 0:00:15.753 ******** 2026-03-07 00:56:43.352982 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:56:43.352986 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:56:43.352991 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:56:43.352995 | orchestrator | 2026-03-07 00:56:43.353000 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config 
subdirectories exist] **** 2026-03-07 00:56:43.353004 | orchestrator | Saturday 07 March 2026 00:49:19 +0000 (0:00:01.661) 0:00:17.414 ******** 2026-03-07 00:56:43.353008 | orchestrator | changed: [testbed-node-0] => (item=users) 2026-03-07 00:56:43.353056 | orchestrator | changed: [testbed-node-1] => (item=users) 2026-03-07 00:56:43.353061 | orchestrator | changed: [testbed-node-2] => (item=users) 2026-03-07 00:56:43.353065 | orchestrator | changed: [testbed-node-0] => (item=rules) 2026-03-07 00:56:43.353069 | orchestrator | changed: [testbed-node-1] => (item=rules) 2026-03-07 00:56:43.353074 | orchestrator | changed: [testbed-node-2] => (item=rules) 2026-03-07 00:56:43.353078 | orchestrator | 2026-03-07 00:56:43.353082 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2026-03-07 00:56:43.353086 | orchestrator | Saturday 07 March 2026 00:49:21 +0000 (0:00:02.302) 0:00:19.717 ******** 2026-03-07 00:56:43.353091 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:56:43.353095 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:56:43.353099 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:56:43.353104 | orchestrator | 2026-03-07 00:56:43.353108 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2026-03-07 00:56:43.353115 | orchestrator | Saturday 07 March 2026 00:49:24 +0000 (0:00:02.682) 0:00:22.400 ******** 2026-03-07 00:56:43.353152 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:56:43.353159 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:56:43.353166 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:56:43.353173 | orchestrator | 2026-03-07 00:56:43.353180 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2026-03-07 00:56:43.353187 | orchestrator | Saturday 07 March 2026 00:49:27 +0000 (0:00:03.676) 0:00:26.076 ******** 2026-03-07 00:56:43.353195 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-07 00:56:43.353209 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-07 00:56:43.353218 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-07 00:56:43.353227 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-07 00:56:43.353232 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-07 00:56:43.353238 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-07 00:56:43.353245 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': 
['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__5bc6e7a7d4e481841fe764a2dad0469821e709d9', '__omit_place_holder__5bc6e7a7d4e481841fe764a2dad0469821e709d9'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-07 00:56:43.353253 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__5bc6e7a7d4e481841fe764a2dad0469821e709d9', '__omit_place_holder__5bc6e7a7d4e481841fe764a2dad0469821e709d9'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-07 00:56:43.353260 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:56:43.353326 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:56:43.353343 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': 
'30'}}})  2026-03-07 00:56:43.353356 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-07 00:56:43.353360 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-07 00:56:43.353364 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__5bc6e7a7d4e481841fe764a2dad0469821e709d9', '__omit_place_holder__5bc6e7a7d4e481841fe764a2dad0469821e709d9'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-07 00:56:43.353368 | orchestrator | 
skipping: [testbed-node-1] 2026-03-07 00:56:43.353372 | orchestrator | 2026-03-07 00:56:43.353376 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2026-03-07 00:56:43.353380 | orchestrator | Saturday 07 March 2026 00:49:30 +0000 (0:00:02.766) 0:00:28.843 ******** 2026-03-07 00:56:43.353384 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-07 00:56:43.353388 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-07 00:56:43.353396 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-07 00:56:43.353405 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-07 00:56:43.353409 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-07 00:56:43.353413 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'__omit_place_holder__5bc6e7a7d4e481841fe764a2dad0469821e709d9', '__omit_place_holder__5bc6e7a7d4e481841fe764a2dad0469821e709d9'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-07 00:56:43.353420 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-07 00:56:43.353426 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-07 00:56:43.353433 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__5bc6e7a7d4e481841fe764a2dad0469821e709d9', 
'__omit_place_holder__5bc6e7a7d4e481841fe764a2dad0469821e709d9'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-07 00:56:43.353448 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-07 00:56:43.353462 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-07 00:56:43.353469 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__5bc6e7a7d4e481841fe764a2dad0469821e709d9', 
'__omit_place_holder__5bc6e7a7d4e481841fe764a2dad0469821e709d9'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-07 00:56:43.353476 | orchestrator | 2026-03-07 00:56:43.353483 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2026-03-07 00:56:43.353489 | orchestrator | Saturday 07 March 2026 00:49:37 +0000 (0:00:07.046) 0:00:35.889 ******** 2026-03-07 00:56:43.353496 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-07 00:56:43.353501 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-07 00:56:43.353505 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-07 00:56:43.353516 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-07 00:56:43.353523 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-07 00:56:43.353527 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 
'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-07 00:56:43.353547 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-07 00:56:43.353552 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-07 00:56:43.353556 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-07 00:56:43.353560 | orchestrator | 2026-03-07 00:56:43.353571 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2026-03-07 00:56:43.353575 | orchestrator | Saturday 07 March 2026 00:49:41 +0000 (0:00:03.430) 0:00:39.319 ******** 2026-03-07 00:56:43.353583 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-03-07 00:56:43.353587 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-03-07 00:56:43.353591 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-03-07 00:56:43.353595 | orchestrator | 2026-03-07 00:56:43.353599 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2026-03-07 00:56:43.353602 | orchestrator | Saturday 07 March 2026 00:49:45 +0000 (0:00:04.250) 0:00:43.570 ******** 2026-03-07 00:56:43.353606 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-03-07 00:56:43.353621 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-03-07 00:56:43.353626 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-03-07 00:56:43.353629 | orchestrator | 2026-03-07 00:56:43.353991 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2026-03-07 00:56:43.354005 | orchestrator | Saturday 07 March 2026 00:49:50 +0000 (0:00:05.554) 0:00:49.124 ******** 2026-03-07 00:56:43.354010 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:56:43.354080 
| orchestrator | skipping: [testbed-node-1] 2026-03-07 00:56:43.354087 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:56:43.354094 | orchestrator | 2026-03-07 00:56:43.354102 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2026-03-07 00:56:43.354109 | orchestrator | Saturday 07 March 2026 00:49:51 +0000 (0:00:01.170) 0:00:50.295 ******** 2026-03-07 00:56:43.354113 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-03-07 00:56:43.354121 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-03-07 00:56:43.354125 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-03-07 00:56:43.354129 | orchestrator | 2026-03-07 00:56:43.354133 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2026-03-07 00:56:43.354137 | orchestrator | Saturday 07 March 2026 00:49:56 +0000 (0:00:04.425) 0:00:54.721 ******** 2026-03-07 00:56:43.354140 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-03-07 00:56:43.354144 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-03-07 00:56:43.354148 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-03-07 00:56:43.354152 | orchestrator | 2026-03-07 00:56:43.354155 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2026-03-07 00:56:43.354160 | orchestrator | Saturday 07 March 2026 00:50:01 +0000 (0:00:05.398) 0:01:00.119 ******** 2026-03-07 00:56:43.354166 | orchestrator | changed: [testbed-node-0] => 
(item=haproxy.pem) 2026-03-07 00:56:43.354173 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2026-03-07 00:56:43.354180 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2026-03-07 00:56:43.354186 | orchestrator | 2026-03-07 00:56:43.354192 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2026-03-07 00:56:43.354199 | orchestrator | Saturday 07 March 2026 00:50:04 +0000 (0:00:02.443) 0:01:02.563 ******** 2026-03-07 00:56:43.354205 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2026-03-07 00:56:43.354212 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2026-03-07 00:56:43.354218 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2026-03-07 00:56:43.354225 | orchestrator | 2026-03-07 00:56:43.354232 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-03-07 00:56:43.354245 | orchestrator | Saturday 07 March 2026 00:50:06 +0000 (0:00:02.246) 0:01:04.810 ******** 2026-03-07 00:56:43.354252 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 00:56:43.354256 | orchestrator | 2026-03-07 00:56:43.354260 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2026-03-07 00:56:43.354263 | orchestrator | Saturday 07 March 2026 00:50:07 +0000 (0:00:00.855) 0:01:05.666 ******** 2026-03-07 00:56:43.354268 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-07 00:56:43.354273 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-07 00:56:43.354282 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-07 00:56:43.354289 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-07 00:56:43.354293 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-07 00:56:43.354297 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-07 00:56:43.354303 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-07 00:56:43.354307 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-07 00:56:43.354311 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-07 00:56:43.354315 | orchestrator | 2026-03-07 00:56:43.354319 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2026-03-07 00:56:43.354323 | orchestrator | Saturday 07 March 2026 00:50:10 +0000 (0:00:03.523) 0:01:09.190 ******** 2026-03-07 00:56:43.354330 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-07 00:56:43.354337 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-07 00:56:43.354341 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-07 00:56:43.354348 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:56:43.354352 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-07 00:56:43.354356 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-07 00:56:43.354360 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-07 00:56:43.354364 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:56:43.354368 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-07 00:56:43.354374 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-07 00:56:43.354378 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-07 00:56:43.354382 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:56:43.354386 | orchestrator | 2026-03-07 00:56:43.354392 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2026-03-07 00:56:43.354396 | orchestrator | Saturday 07 March 2026 00:50:12 +0000 (0:00:01.227) 0:01:10.417 ******** 2026-03-07 00:56:43.354400 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-07 00:56:43.354444 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-07 00:56:43.354450 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-07 00:56:43.354454 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:56:43.354458 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 
'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-07 00:56:43.354465 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-07 00:56:43.354471 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-07 00:56:43.354475 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:56:43.354479 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-07 00:56:43.354493 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-07 00:56:43.354497 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-07 00:56:43.354501 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:56:43.354505 | orchestrator | 2026-03-07 00:56:43.354509 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-03-07 00:56:43.354512 | orchestrator | Saturday 07 March 2026 00:50:14 +0000 (0:00:02.446) 0:01:12.864 ******** 2026-03-07 00:56:43.354516 
| orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-07 00:56:43.354523 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-07 00:56:43.354527 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-07 00:56:43.354531 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:56:43.354543 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-03-07 00:56:43.354547 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-07 00:56:43.354551 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-07 00:56:43.354555 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-03-07 00:56:43.354559 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:56:43.354565 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-07 00:56:43.354575 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-07 00:56:43.354582 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:56:43.354589 | orchestrator |
2026-03-07 00:56:43.354596 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] ***
2026-03-07 00:56:43.354603 | orchestrator | Saturday 07 March 2026 00:50:16 +0000 (0:00:02.314) 0:01:15.178 ********
2026-03-07 00:56:43.354618 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-07 00:56:43.354625 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-07 00:56:43.354663 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-07 00:56:43.354669 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:56:43.354673 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-03-07 00:56:43.354678 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-07 00:56:43.354682 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-07 00:56:43.354687 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:56:43.354696 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-03-07 00:56:43.354707 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-07 00:56:43.354711 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-07 00:56:43.354716 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:56:43.354720 | orchestrator |
2026-03-07 00:56:43.354724 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] *****
2026-03-07 00:56:43.354729 | orchestrator | Saturday 07 March 2026 00:50:17 +0000 (0:00:00.868) 0:01:16.047 ********
2026-03-07 00:56:43.354734 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-07 00:56:43.354738 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-07 00:56:43.354743 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-07 00:56:43.354759 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:56:43.354771 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-03-07 00:56:43.354788 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-07 00:56:43.354795 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-07 00:56:43.354802 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:56:43.354808 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-03-07 00:56:43.354815 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-07 00:56:43.354822 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-07 00:56:43.354829 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:56:43.354845 | orchestrator |
2026-03-07 00:56:43.354851 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] *******
2026-03-07 00:56:43.354858 | orchestrator | Saturday 07 March 2026 00:50:19 +0000 (0:00:01.329) 0:01:17.376 ********
2026-03-07 00:56:43.354864 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-07 00:56:43.354879 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-07 00:56:43.354900 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-07 00:56:43.354908 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:56:43.354915 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-03-07 00:56:43.354935 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-07 00:56:43.354949 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-07 00:56:43.354954 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:56:43.354963 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-03-07 00:56:43.354982 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-07 00:56:43.354987 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-07 00:56:43.354992 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:56:43.354997 | orchestrator |
2026-03-07 00:56:43.355002 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] ***
2026-03-07 00:56:43.355009 | orchestrator | Saturday 07 March 2026 00:50:20 +0000 (0:00:01.060) 0:01:18.437 ********
2026-03-07 00:56:43.355013 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-07 00:56:43.355017 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-07 00:56:43.355020 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-07 00:56:43.355024 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:56:43.355034 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-03-07 00:56:43.355045 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-07 00:56:43.355054 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-07 00:56:43.355058 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:56:43.355064 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-03-07 00:56:43.355068 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-07 00:56:43.355072 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-07 00:56:43.355076 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:56:43.355080 | orchestrator |
2026-03-07 00:56:43.355083 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] ****
2026-03-07 00:56:43.355087 | orchestrator | Saturday 07 March 2026 00:50:20 +0000 (0:00:00.810) 0:01:19.247 ********
2026-03-07 00:56:43.355091 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-07 00:56:43.355097 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-07 00:56:43.355101 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-07 00:56:43.355105 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:56:43.355112 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-03-07 00:56:43.355120 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-07 00:56:43.355124 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-07 00:56:43.355134 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:56:43.355138 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-03-07 00:56:43.355144 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-07 00:56:43.355148 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-07 00:56:43.355152 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:56:43.355156 | orchestrator |
2026-03-07 00:56:43.355160 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************
2026-03-07 00:56:43.355163 | orchestrator | Saturday 07 March 2026 00:50:21 +0000 (0:00:00.889) 0:01:20.137 ********
2026-03-07 00:56:43.355167 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2026-03-07 00:56:43.355171 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2026-03-07 00:56:43.355177 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2026-03-07 00:56:43.355181 | orchestrator |
2026-03-07 00:56:43.355185 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] ***********************
2026-03-07 00:56:43.355188 | orchestrator | Saturday 07 March 2026 00:50:23 +0000 (0:00:02.075) 0:01:22.213 ********
2026-03-07 00:56:43.355192 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2026-03-07 00:56:43.355196 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2026-03-07 00:56:43.355200 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2026-03-07 00:56:43.355203 | orchestrator |
2026-03-07 00:56:43.355207 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] ****************************
2026-03-07 00:56:43.355213 | orchestrator | Saturday 07 March 2026 00:50:25 +0000 (0:00:02.011) 0:01:24.225 ********
2026-03-07 00:56:43.355217 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2026-03-07 00:56:43.355220 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2026-03-07 00:56:43.355224 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2026-03-07 00:56:43.355228 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-03-07 00:56:43.355232 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:56:43.355235 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-03-07 00:56:43.355239 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:56:43.355243 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-03-07 00:56:43.355247 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:56:43.355255 | orchestrator |
2026-03-07 00:56:43.355259 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] ****************************
2026-03-07 00:56:43.355265 | orchestrator | Saturday 07 March 2026 00:50:27 +0000 (0:00:01.451) 0:01:25.677 ********
2026-03-07 00:56:43.355269 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-07 00:56:43.355273 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-03-07 00:56:43.355277 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-03-07 00:56:43.355284 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False,
'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-07 00:56:43.355290 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-07 00:56:43.355322 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-07 00:56:43.355328 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-07 00:56:43.355332 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-07 00:56:43.355336 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-07 00:56:43.355340 | orchestrator | 2026-03-07 00:56:43.355344 | orchestrator | TASK [include_role : aodh] ***************************************************** 2026-03-07 00:56:43.355348 | orchestrator | Saturday 07 March 2026 00:50:30 +0000 (0:00:03.582) 0:01:29.259 ******** 2026-03-07 00:56:43.355351 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 00:56:43.355355 | orchestrator | 2026-03-07 00:56:43.355359 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2026-03-07 00:56:43.355363 | orchestrator | Saturday 07 
March 2026 00:50:31 +0000 (0:00:00.745) 0:01:30.005 ******** 2026-03-07 00:56:43.355367 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-07 00:56:43.355374 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-07 00:56:43.355381 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-07 00:56:43.355387 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-07 00:56:43.355391 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-07 00:56:43.355395 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-07 00:56:43.355399 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-07 00:56:43.355614 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-07 00:56:43.355625 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-07 00:56:43.355633 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-07 00:56:43.355637 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-07 00:56:43.355641 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-07 00:56:43.355645 | orchestrator | 2026-03-07 00:56:43.355648 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2026-03-07 00:56:43.355652 | orchestrator | Saturday 07 March 2026 00:50:37 +0000 (0:00:06.292) 0:01:36.297 ******** 2026-03-07 00:56:43.355656 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-07 00:56:43.355664 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': 
'30'}}})  2026-03-07 00:56:43.355669 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-07 00:56:43.355676 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-07 00:56:43.355680 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:56:43.355684 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 
'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-07 00:56:43.355688 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-07 00:56:43.355692 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-07 00:56:43.355695 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': 
'30'}}})  2026-03-07 00:56:43.355699 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:56:43.355708 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-07 00:56:43.355721 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-07 00:56:43.355728 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-07 00:56:43.355734 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-07 00:56:43.355741 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:56:43.355748 | orchestrator | 2026-03-07 00:56:43.355803 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2026-03-07 00:56:43.355810 | orchestrator | Saturday 07 March 2026 00:50:39 +0000 (0:00:01.689) 0:01:37.987 ******** 2026-03-07 00:56:43.355817 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-03-07 00:56:43.355824 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-03-07 00:56:43.355831 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:56:43.355836 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-03-07 00:56:43.355840 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-03-07 00:56:43.355844 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:56:43.355848 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-03-07 00:56:43.355852 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-03-07 00:56:43.355860 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:56:43.355864 | orchestrator | 2026-03-07 00:56:43.355872 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2026-03-07 00:56:43.355876 | orchestrator | Saturday 07 March 2026 00:50:40 +0000 (0:00:01.152) 0:01:39.139 ******** 2026-03-07 00:56:43.355879 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:56:43.355883 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:56:43.355887 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:56:43.355890 | orchestrator | 2026-03-07 00:56:43.355894 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2026-03-07 00:56:43.355898 | orchestrator | Saturday 07 March 2026 00:50:42 +0000 (0:00:01.667) 0:01:40.807 ******** 2026-03-07 00:56:43.355902 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:56:43.355906 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:56:43.355909 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:56:43.355913 | orchestrator | 2026-03-07 00:56:43.355917 | orchestrator | TASK [include_role : barbican] ************************************************* 2026-03-07 00:56:43.355925 | orchestrator | Saturday 07 March 2026 
00:50:45 +0000 (0:00:03.341) 0:01:44.149 ******** 2026-03-07 00:56:43.355929 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 00:56:43.355932 | orchestrator | 2026-03-07 00:56:43.355936 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2026-03-07 00:56:43.355940 | orchestrator | Saturday 07 March 2026 00:50:46 +0000 (0:00:00.995) 0:01:45.144 ******** 2026-03-07 00:56:43.355944 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-07 00:56:43.355949 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-07 00:56:43.355953 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-07 00:56:43.355957 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-07 00:56:43.355966 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-07 00:56:43.355972 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-07 00:56:43.355976 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-07 00:56:43.355980 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-07 00:56:43.355984 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-07 00:56:43.355991 | orchestrator | 2026-03-07 00:56:43.355995 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2026-03-07 00:56:43.355999 | orchestrator | Saturday 07 March 2026 00:50:55 +0000 (0:00:08.537) 0:01:53.682 ******** 2026-03-07 00:56:43.356005 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-07 00:56:43.356011 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-07 00:56:43.356015 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-07 00:56:43.356019 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:56:43.356023 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-07 00:56:43.356027 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-07 00:56:43.356035 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 
'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-07 00:56:43.356039 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-07 00:56:43.356043 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:56:43.356049 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-07 00:56:43.356053 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-07 00:56:43.356056 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:56:43.356060 | orchestrator | 2026-03-07 00:56:43.356064 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2026-03-07 00:56:43.356068 | orchestrator | Saturday 07 March 2026 00:50:56 +0000 (0:00:01.068) 0:01:54.750 ******** 2026-03-07 00:56:43.356072 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-07 00:56:43.356076 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-07 00:56:43.356083 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:56:43.356086 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-07 00:56:43.356090 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-07 
00:56:43.356094 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:56:43.356098 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-07 00:56:43.356102 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-07 00:56:43.356106 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:56:43.356110 | orchestrator | 2026-03-07 00:56:43.356113 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2026-03-07 00:56:43.356117 | orchestrator | Saturday 07 March 2026 00:50:58 +0000 (0:00:01.706) 0:01:56.456 ******** 2026-03-07 00:56:43.356121 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:56:43.356124 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:56:43.356160 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:56:43.356180 | orchestrator | 2026-03-07 00:56:43.356185 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2026-03-07 00:56:43.356188 | orchestrator | Saturday 07 March 2026 00:50:59 +0000 (0:00:01.456) 0:01:57.913 ******** 2026-03-07 00:56:43.356192 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:56:43.356196 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:56:43.356200 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:56:43.356203 | orchestrator | 2026-03-07 00:56:43.356215 | orchestrator | TASK [include_role : blazar] *************************************************** 2026-03-07 00:56:43.356220 | orchestrator | Saturday 07 March 2026 00:51:02 +0000 (0:00:02.420) 0:02:00.333 ******** 2026-03-07 00:56:43.356224 | orchestrator | skipping: [testbed-node-0] 
2026-03-07 00:56:43.356227 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:56:43.356231 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:56:43.356235 | orchestrator | 2026-03-07 00:56:43.356250 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2026-03-07 00:56:43.356255 | orchestrator | Saturday 07 March 2026 00:51:02 +0000 (0:00:00.413) 0:02:00.747 ******** 2026-03-07 00:56:43.356260 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 00:56:43.356264 | orchestrator | 2026-03-07 00:56:43.356268 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2026-03-07 00:56:43.356272 | orchestrator | Saturday 07 March 2026 00:51:03 +0000 (0:00:01.046) 0:02:01.794 ******** 2026-03-07 00:56:43.356290 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-03-07 00:56:43.356296 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check 
inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-03-07 00:56:43.356304 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-03-07 00:56:43.356310 | orchestrator | 2026-03-07 00:56:43.356317 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2026-03-07 00:56:43.356323 | orchestrator | Saturday 07 March 2026 00:51:06 +0000 (0:00:03.419) 0:02:05.213 ******** 2026-03-07 00:56:43.356333 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server 
testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-03-07 00:56:43.356340 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:56:43.356351 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-03-07 00:56:43.356358 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:56:43.356364 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check 
inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-03-07 00:56:43.356372 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:56:43.356376 | orchestrator | 2026-03-07 00:56:43.356380 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2026-03-07 00:56:43.356385 | orchestrator | Saturday 07 March 2026 00:51:10 +0000 (0:00:03.334) 0:02:08.547 ******** 2026-03-07 00:56:43.356390 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-03-07 00:56:43.356396 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-03-07 00:56:43.356401 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:56:43.356406 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-03-07 00:56:43.356410 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-03-07 00:56:43.356415 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:56:43.356422 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-03-07 00:56:43.356427 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-03-07 00:56:43.356432 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:56:43.356436 | orchestrator | 2026-03-07 00:56:43.356441 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2026-03-07 00:56:43.356448 | orchestrator 
| Saturday 07 March 2026 00:51:16 +0000 (0:00:06.305) 0:02:14.853 ******** 2026-03-07 00:56:43.356452 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:56:43.356458 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:56:43.356462 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:56:43.356466 | orchestrator | 2026-03-07 00:56:43.356469 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2026-03-07 00:56:43.356473 | orchestrator | Saturday 07 March 2026 00:51:18 +0000 (0:00:01.938) 0:02:16.792 ******** 2026-03-07 00:56:43.356477 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:56:43.356481 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:56:43.356484 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:56:43.356488 | orchestrator | 2026-03-07 00:56:43.356492 | orchestrator | TASK [include_role : cinder] *************************************************** 2026-03-07 00:56:43.356496 | orchestrator | Saturday 07 March 2026 00:51:20 +0000 (0:00:02.285) 0:02:19.078 ******** 2026-03-07 00:56:43.356499 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 00:56:43.356503 | orchestrator | 2026-03-07 00:56:43.356507 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2026-03-07 00:56:43.356511 | orchestrator | Saturday 07 March 2026 00:51:22 +0000 (0:00:01.551) 0:02:20.630 ******** 2026-03-07 00:56:43.356515 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-07 00:56:43.356519 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-07 00:56:43.356523 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-07 00:56:43.356529 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-07 00:56:43.356540 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-07 00:56:43.356544 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-07 00:56:43.356548 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-07 00:56:43.356552 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-07 00:56:43.356558 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-07 00:56:43.356562 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-07 00:56:43.356570 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-07 00:56:43.356574 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 
'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-07 00:56:43.356578 | orchestrator | 2026-03-07 00:56:43.356582 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2026-03-07 00:56:43.356586 | orchestrator | Saturday 07 March 2026 00:51:30 +0000 (0:00:08.347) 0:02:28.977 ******** 2026-03-07 00:56:43.356590 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-07 00:56:43.356594 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-07 00:56:43.356600 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-07 00:56:43.356608 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-07 00:56:43.356612 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:56:43.356616 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-07 00:56:43.356620 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-07 00:56:43.356624 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 
'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-07 00:56:43.356628 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-07 00:56:43.356637 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-07 00:56:43.356643 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 
'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-07 00:56:43.356647 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-07 00:56:43.356651 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:56:43.356655 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-07 00:56:43.356659 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:56:43.356662 | orchestrator | 2026-03-07 00:56:43.356666 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2026-03-07 00:56:43.356670 | orchestrator | Saturday 07 March 2026 00:51:32 +0000 (0:00:01.644) 0:02:30.622 ******** 2026-03-07 00:56:43.356674 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-07 00:56:43.356678 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-07 00:56:43.356682 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:56:43.356689 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-07 00:56:43.356693 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-07 00:56:43.356697 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:56:43.356701 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-07 00:56:43.356707 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-07 00:56:43.356711 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:56:43.356714 | orchestrator | 2026-03-07 00:56:43.356718 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2026-03-07 00:56:43.356722 | orchestrator | Saturday 07 March 2026 00:51:33 +0000 (0:00:01.417) 0:02:32.039 ******** 2026-03-07 00:56:43.356726 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:56:43.356730 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:56:43.356733 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:56:43.356737 | orchestrator | 2026-03-07 00:56:43.356741 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2026-03-07 00:56:43.356746 | orchestrator | Saturday 07 March 2026 00:51:35 +0000 (0:00:01.693) 0:02:33.732 ******** 2026-03-07 00:56:43.356765 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:56:43.356772 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:56:43.356778 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:56:43.356783 | orchestrator | 2026-03-07 00:56:43.356789 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2026-03-07 00:56:43.356795 | orchestrator | Saturday 07 March 2026 00:51:37 +0000 (0:00:02.264) 0:02:35.997 ******** 2026-03-07 00:56:43.356801 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:56:43.356806 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:56:43.356812 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:56:43.356827 | orchestrator | 2026-03-07 00:56:43.356833 | orchestrator | TASK [include_role : cyborg] *************************************************** 2026-03-07 00:56:43.356839 | orchestrator | Saturday 07 March 2026 00:51:38 +0000 (0:00:00.641) 0:02:36.638 ******** 2026-03-07 00:56:43.356845 | 
orchestrator | skipping: [testbed-node-0] 2026-03-07 00:56:43.356848 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:56:43.356852 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:56:43.356856 | orchestrator | 2026-03-07 00:56:43.356860 | orchestrator | TASK [include_role : designate] ************************************************ 2026-03-07 00:56:43.356864 | orchestrator | Saturday 07 March 2026 00:51:38 +0000 (0:00:00.376) 0:02:37.015 ******** 2026-03-07 00:56:43.356867 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 00:56:43.356871 | orchestrator | 2026-03-07 00:56:43.356875 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2026-03-07 00:56:43.356878 | orchestrator | Saturday 07 March 2026 00:51:39 +0000 (0:00:01.039) 0:02:38.055 ******** 2026-03-07 00:56:43.356882 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-07 00:56:43.356891 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-07 00:56:43.356895 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-07 00:56:43.356903 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-07 00:56:43.356910 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-07 00:56:43.356914 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-07 00:56:43.356918 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-07 00:56:43.356924 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-07 00:56:43.356928 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-07 00:56:43.357104 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-07 00:56:43.357118 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 
'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-07 00:56:43.357125 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-07 00:56:43.357132 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-07 00:56:43.357144 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-07 00:56:43.357158 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-07 00:56:43.357165 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-07 00:56:43.357176 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-07 00:56:43.357187 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-07 00:56:43.357194 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-07 00:56:43.357201 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-07 00:56:43.357215 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-07 00:56:43.357222 | orchestrator | 2026-03-07 00:56:43.357228 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2026-03-07 00:56:43.357233 | orchestrator | Saturday 07 March 2026 00:51:45 +0000 (0:00:06.137) 0:02:44.192 ******** 2026-03-07 00:56:43.357237 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-07 00:56:43.357244 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-07 00:56:43.357251 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-07 00:56:43.357255 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-07 00:56:43.357261 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-07 00:56:43.357265 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-07 00:56:43.357269 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 
'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-07 00:56:43.357275 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-07 00:56:43.357281 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-07 00:56:43.357285 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-07 00:56:43.357298 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-07 00:56:43.357302 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-07 00:56:43.357306 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 
'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-07 00:56:43.357312 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-07 00:56:43.357316 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:56:43.357320 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-07 00:56:43.357324 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:56:43.357330 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 
'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-07 00:56:43.357337 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-07 00:56:43.357341 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-07 00:56:43.357345 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-07 00:56:43.357349 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-07 00:56:43.357355 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-07 00:56:43.357359 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:56:43.357363 | orchestrator | 2026-03-07 00:56:43.357378 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2026-03-07 00:56:43.357382 | orchestrator | Saturday 07 March 2026 00:51:48 +0000 (0:00:02.201) 
0:02:46.394 ******** 2026-03-07 00:56:43.357386 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-03-07 00:56:43.357393 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-03-07 00:56:43.357400 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:56:43.357404 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-03-07 00:56:43.357408 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-03-07 00:56:43.357412 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:56:43.357416 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-03-07 00:56:43.357419 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-03-07 00:56:43.357423 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:56:43.357427 | orchestrator | 2026-03-07 00:56:43.357431 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2026-03-07 00:56:43.357434 | orchestrator | Saturday 07 March 2026 00:51:49 +0000 (0:00:01.606) 0:02:48.000 ******** 2026-03-07 00:56:43.357438 | orchestrator | changed: [testbed-node-1] 
2026-03-07 00:56:43.357442 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:56:43.357458 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:56:43.357463 | orchestrator | 2026-03-07 00:56:43.357471 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2026-03-07 00:56:43.357475 | orchestrator | Saturday 07 March 2026 00:51:51 +0000 (0:00:02.097) 0:02:50.097 ******** 2026-03-07 00:56:43.357479 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:56:43.357483 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:56:43.357486 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:56:43.357490 | orchestrator | 2026-03-07 00:56:43.357494 | orchestrator | TASK [include_role : etcd] ***************************************************** 2026-03-07 00:56:43.357498 | orchestrator | Saturday 07 March 2026 00:51:53 +0000 (0:00:02.087) 0:02:52.185 ******** 2026-03-07 00:56:43.357502 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:56:43.357505 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:56:43.357509 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:56:43.357513 | orchestrator | 2026-03-07 00:56:43.357517 | orchestrator | TASK [include_role : glance] *************************************************** 2026-03-07 00:56:43.357520 | orchestrator | Saturday 07 March 2026 00:51:54 +0000 (0:00:00.624) 0:02:52.809 ******** 2026-03-07 00:56:43.357524 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 00:56:43.357528 | orchestrator | 2026-03-07 00:56:43.357532 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2026-03-07 00:56:43.357536 | orchestrator | Saturday 07 March 2026 00:51:55 +0000 (0:00:00.934) 0:02:53.744 ******** 2026-03-07 00:56:43.357544 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 
'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-07 00:56:43.357554 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-07 00:56:43.357559 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-07 00:56:43.357573 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout 
client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-07 00:56:43.357582 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-07 00:56:43.357597 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 
192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-07 00:56:43.357606 | orchestrator | 2026-03-07 00:56:43.357610 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2026-03-07 00:56:43.357614 | orchestrator | Saturday 07 March 2026 00:52:01 +0000 (0:00:05.744) 0:02:59.488 ******** 2026-03-07 00:56:43.357618 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': 
['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-07 00:56:43.357628 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl 
verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-07 00:56:43.357636 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:56:43.357640 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 
check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-07 00:56:43.357647 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': 
['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-07 00:56:43.357654 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:56:43.357661 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': 
['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-07 00:56:43.357667 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file 
ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-07 00:56:43.357674 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:56:43.357678 | orchestrator | 2026-03-07 00:56:43.357682 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2026-03-07 00:56:43.357686 | orchestrator | Saturday 07 March 2026 00:52:05 +0000 (0:00:04.412) 0:03:03.901 ******** 2026-03-07 00:56:43.357691 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-07 00:56:43.357695 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-07 00:56:43.357700 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:56:43.357703 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-07 00:56:43.357708 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-07 00:56:43.357711 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:56:43.357715 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-07 00:56:43.357719 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', 
'']}}) 
2026-03-07 00:56:43.357726 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:56:43.357730 | orchestrator |
2026-03-07 00:56:43.357733 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] *************
2026-03-07 00:56:43.357737 | orchestrator | Saturday 07 March 2026 00:52:09 +0000 (0:00:04.066) 0:03:07.967 ********
2026-03-07 00:56:43.357741 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:56:43.357745 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:56:43.357773 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:56:43.357780 | orchestrator |
2026-03-07 00:56:43.357786 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] *************
2026-03-07 00:56:43.357793 | orchestrator | Saturday 07 March 2026 00:52:11 +0000 (0:00:01.380) 0:03:09.348 ********
2026-03-07 00:56:43.357799 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:56:43.357805 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:56:43.357811 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:56:43.357859 | orchestrator |
2026-03-07 00:56:43.357864 | orchestrator | TASK [include_role : gnocchi] **************************************************
2026-03-07 00:56:43.357871 | orchestrator | Saturday 07 March 2026 00:52:13 +0000 (0:00:02.212) 0:03:11.561 ********
2026-03-07 00:56:43.357880 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:56:43.357884 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:56:43.357889 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:56:43.357893 | orchestrator |
2026-03-07 00:56:43.357898 | orchestrator | TASK [include_role : grafana] **************************************************
2026-03-07 00:56:43.357902 | orchestrator | Saturday 07 March 2026 00:52:13 +0000 (0:00:00.600) 0:03:12.161 ********
2026-03-07 00:56:43.357906 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-07 00:56:43.357910 |
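The glance_api service definition logged above carries a custom_member_list plus frontend/backend extras. As a hedged sketch only, those values would correspond to roughly the following HAProxy backend; the backend name and layout here are assumptions, since the real file is rendered by kolla-ansible's haproxy-config templates:

```
# Sketch only: approximate HAProxy backend implied by the glance_api
# definition in the log. Backend name and layout are assumptions; the
# server lines and timeouts are taken verbatim from the log values.
backend glance_api_back
    mode http
    timeout server 6h   # from backend_http_extra
    server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5
    server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5
    server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5
```

The matching `timeout client 6h` from frontend_http_extra would sit in the corresponding frontend section.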
orchestrator | 2026-03-07 00:56:43.357915 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2026-03-07 00:56:43.357919 | orchestrator | Saturday 07 March 2026 00:52:14 +0000 (0:00:00.947) 0:03:13.108 ******** 2026-03-07 00:56:43.357927 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-07 00:56:43.357932 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-07 00:56:43.357937 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-07 00:56:43.357944 | orchestrator | 2026-03-07 00:56:43.357949 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2026-03-07 00:56:43.357953 | orchestrator | Saturday 07 March 2026 00:52:18 +0000 (0:00:03.633) 0:03:16.741 ******** 2026-03-07 00:56:43.357958 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-07 00:56:43.357964 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-07 00:56:43.357969 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:56:43.357973 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:56:43.357978 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-07 00:56:43.357983 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:56:43.357987 | orchestrator | 2026-03-07 00:56:43.357991 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2026-03-07 00:56:43.357996 | orchestrator | Saturday 07 March 2026 00:52:19 +0000 (0:00:00.741) 0:03:17.482 ******** 2026-03-07 00:56:43.358000 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-03-07 00:56:43.358005 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-03-07 00:56:43.358009 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:56:43.358045 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 
'port': '3000', 'listen_port': '3000'}}) 
2026-03-07 00:56:43.358063 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}}) 
2026-03-07 00:56:43.358071 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}) 
2026-03-07 00:56:43.358076 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:56:43.358080 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}) 
2026-03-07 00:56:43.358084 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:56:43.358088 | orchestrator |
2026-03-07 00:56:43.358092 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************
2026-03-07 00:56:43.358096 | orchestrator | Saturday 07 March 2026 00:52:19 +0000 (0:00:00.790) 0:03:18.272 ********
2026-03-07 00:56:43.358100 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:56:43.358103 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:56:43.358107 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:56:43.358111 | orchestrator |
2026-03-07 00:56:43.358115 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************
2026-03-07 00:56:43.358118 | orchestrator | Saturday 07 March 2026 00:52:21 +0000 (0:00:01.417) 0:03:19.690 ********
2026-03-07 00:56:43.358122 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:56:43.358126 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:56:43.358129 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:56:43.358133 | orchestrator |
2026-03-07 00:56:43.358137 | orchestrator | TASK
[include_role : heat] ***************************************************** 2026-03-07 00:56:43.358141 | orchestrator | Saturday 07 March 2026 00:52:24 +0000 (0:00:02.654) 0:03:22.345 ******** 2026-03-07 00:56:43.358144 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:56:43.358148 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:56:43.358152 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:56:43.358156 | orchestrator | 2026-03-07 00:56:43.358159 | orchestrator | TASK [include_role : horizon] ************************************************** 2026-03-07 00:56:43.358163 | orchestrator | Saturday 07 March 2026 00:52:24 +0000 (0:00:00.624) 0:03:22.969 ******** 2026-03-07 00:56:43.358167 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 00:56:43.358171 | orchestrator | 2026-03-07 00:56:43.358174 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2026-03-07 00:56:43.358178 | orchestrator | Saturday 07 March 2026 00:52:25 +0000 (0:00:01.087) 0:03:24.056 ******** 2026-03-07 00:56:43.358188 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-07 00:56:43.358196 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', 
'', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-07 00:56:43.358207 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 
'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-07 00:56:43.358214 | orchestrator | 2026-03-07 00:56:43.358218 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2026-03-07 00:56:43.358222 | orchestrator | Saturday 07 March 2026 00:52:31 +0000 (0:00:05.266) 0:03:29.323 ******** 2026-03-07 00:56:43.358229 | orchestrator | 
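Every horizon frontend entry above carries ACME plumbing: a use_backend rule in frontend_http_extra / frontend_redirect_extra that diverts HTTP-01 challenges to the acme_client backend before any redirect or proxying happens. A rough, assumed sketch of what that rule means once rendered into haproxy.cfg (frontend/backend names here are illustrative, not the actual generated names):

```
# Sketch only: ACME challenge routing implied by the horizon haproxy
# entries in the log. Names are assumptions; the ACL is verbatim.
frontend horizon_external_front
    mode http
    # HTTP-01 challenges are answered by the ACME client, not horizon
    use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }
    default_backend horizon_back

backend horizon_back
    mode http
    balance roundrobin   # from backend_http_extra
```

This matches the acme_client entry with with_frontend: False, i.e. the ACME backend exists only as a target of these ACL rules and gets no frontend of its own.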
skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-07 00:56:43.358234 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:56:43.358242 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 
'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-07 00:56:43.358249 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:56:43.358256 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ 
}']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-07 00:56:43.358260 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:56:43.358264 | orchestrator | 2026-03-07 00:56:43.358268 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2026-03-07 00:56:43.358272 | orchestrator | Saturday 07 March 2026 00:52:32 +0000 (0:00:01.383) 0:03:30.706 ******** 2026-03-07 00:56:43.358283 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-07 00:56:43.358288 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-07 00:56:43.358296 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 
'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-07 00:56:43.358304 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-07 00:56:43.358310 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-03-07 00:56:43.358317 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:56:43.358324 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-07 00:56:43.358330 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-07 00:56:43.358337 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-07 00:56:43.358345 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 
'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-07 00:56:43.358352 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-07 00:56:43.358362 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-07 00:56:43.358369 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-07 00:56:43.358388 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-07 00:56:43.358395 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-03-07 00:56:43.358402 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2026-03-07 00:56:43.358409 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:56:43.358416 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:56:43.358422 | orchestrator |
2026-03-07 00:56:43.358430 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************
2026-03-07 00:56:43.358434 | orchestrator | Saturday 07 March 2026 00:52:33 +0000 (0:00:01.038) 0:03:31.745 ********
2026-03-07 00:56:43.358437 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:56:43.358441 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:56:43.358445 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:56:43.358449 | orchestrator |
2026-03-07 00:56:43.358453 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************
2026-03-07 00:56:43.358459 | orchestrator | Saturday 07 March 2026 00:52:34 +0000 (0:00:01.418) 0:03:33.163 ********
2026-03-07 00:56:43.358474 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:56:43.358480 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:56:43.358486 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:56:43.358492 | orchestrator |
2026-03-07 00:56:43.358498 | orchestrator | TASK [include_role : influxdb] *************************************************
2026-03-07 00:56:43.358503 | orchestrator | Saturday 07 March 2026 00:52:37 +0000 (0:00:02.244) 0:03:35.408 ********
2026-03-07 00:56:43.358510 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:56:43.358516 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:56:43.358522 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:56:43.358528 | orchestrator |
2026-03-07 00:56:43.358535 | orchestrator | TASK [include_role : keystone] *************************************************
2026-03-07 00:56:43.358540 | orchestrator | Saturday 07
March 2026 00:52:37 +0000 (0:00:00.387) 0:03:35.796 ******** 2026-03-07 00:56:43.358546 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:56:43.358552 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:56:43.358559 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:56:43.358565 | orchestrator | 2026-03-07 00:56:43.358572 | orchestrator | TASK [include_role : keystone] ************************************************* 2026-03-07 00:56:43.358579 | orchestrator | Saturday 07 March 2026 00:52:38 +0000 (0:00:00.636) 0:03:36.433 ******** 2026-03-07 00:56:43.358585 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 00:56:43.358591 | orchestrator | 2026-03-07 00:56:43.358598 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2026-03-07 00:56:43.358604 | orchestrator | Saturday 07 March 2026 00:52:39 +0000 (0:00:01.020) 0:03:37.453 ******** 2026-03-07 00:56:43.358612 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance 
roundrobin']}}}}) 2026-03-07 00:56:43.358627 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-07 00:56:43.358634 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-07 00:56:43.358639 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 
'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-07 00:56:43.358643 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-07 00:56:43.358647 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-07 00:56:43.358654 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-07 00:56:43.358661 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-07 00:56:43.358667 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-07 
00:56:43.358671 | orchestrator | 2026-03-07 00:56:43.358675 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2026-03-07 00:56:43.358679 | orchestrator | Saturday 07 March 2026 00:52:43 +0000 (0:00:04.333) 0:03:41.786 ******** 2026-03-07 00:56:43.358683 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-07 00:56:43.358687 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-07 
00:56:43.358694 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-07 00:56:43.358697 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:56:43.358704 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-07 00:56:43.358711 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-07 00:56:43.358715 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-07 00:56:43.358719 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:56:43.358723 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 
'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-07 00:56:43.358729 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-07 00:56:43.358733 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-07 00:56:43.358737 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:56:43.358741 | orchestrator | 2026-03-07 00:56:43.358797 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2026-03-07 00:56:43.358806 | orchestrator | Saturday 07 March 2026 00:52:44 +0000 (0:00:00.645) 0:03:42.432 ******** 2026-03-07 00:56:43.358810 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': 
True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-07 00:56:43.358815 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-07 00:56:43.358819 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:56:43.358825 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-07 00:56:43.358830 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-07 00:56:43.358834 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-07 00:56:43.358838 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:56:43.358842 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-07 00:56:43.358846 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:56:43.358849 | orchestrator | 2026-03-07 00:56:43.358853 | orchestrator | TASK [proxysql-config : 
Copying over keystone ProxySQL users config] ***********
2026-03-07 00:56:43.358857 | orchestrator | Saturday 07 March 2026 00:52:45 +0000 (0:00:00.910) 0:03:43.342 ********
2026-03-07 00:56:43.358861 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:56:43.358919 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:56:43.358925 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:56:43.358928 | orchestrator |
2026-03-07 00:56:43.358932 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] ***********
2026-03-07 00:56:43.358936 | orchestrator | Saturday 07 March 2026 00:52:46 +0000 (0:00:01.402) 0:03:44.744 ********
2026-03-07 00:56:43.358940 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:56:43.358944 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:56:43.358948 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:56:43.358952 | orchestrator |
2026-03-07 00:56:43.358955 | orchestrator | TASK [include_role : letsencrypt] **********************************************
2026-03-07 00:56:43.358959 | orchestrator | Saturday 07 March 2026 00:52:48 +0000 (0:00:02.362) 0:03:47.107 ********
2026-03-07 00:56:43.358963 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:56:43.358967 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:56:43.358971 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:56:43.358974 | orchestrator |
2026-03-07 00:56:43.358978 | orchestrator | TASK [include_role : magnum] ***************************************************
2026-03-07 00:56:43.358982 | orchestrator | Saturday 07 March 2026 00:52:49 +0000 (0:00:00.595) 0:03:47.703 ********
2026-03-07 00:56:43.358986 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-07 00:56:43.358990 | orchestrator |
2026-03-07 00:56:43.358993 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] *********************
2026-03-07 00:56:43.358997 | orchestrator | Saturday
07 March 2026 00:52:50 +0000 (0:00:01.122) 0:03:48.826 ******** 2026-03-07 00:56:43.359002 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-07 00:56:43.359014 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-07 00:56:43.359019 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-07 00:56:43.359026 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-07 00:56:43.359030 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-07 00:56:43.359034 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-07 00:56:43.359038 | orchestrator | 2026-03-07 00:56:43.359042 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2026-03-07 00:56:43.359047 | orchestrator | Saturday 07 March 2026 00:52:55 +0000 (0:00:04.537) 0:03:53.363 ******** 2026-03-07 00:56:43.359055 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-07 00:56:43.359059 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-07 00:56:43.359066 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:56:43.359070 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 
'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-07 00:56:43.359074 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-07 00:56:43.359078 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:56:43.359085 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-07 
00:56:43.359091 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-07 00:56:43.359095 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:56:43.359099 | orchestrator | 2026-03-07 00:56:43.359105 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2026-03-07 00:56:43.359109 | orchestrator | Saturday 07 March 2026 00:52:56 +0000 (0:00:01.064) 0:03:54.428 ******** 2026-03-07 00:56:43.359113 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-03-07 00:56:43.359117 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-03-07 00:56:43.359121 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:56:43.359125 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-03-07 00:56:43.359129 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-03-07 00:56:43.359133 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:56:43.359136 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-03-07 00:56:43.359140 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-03-07 00:56:43.359144 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:56:43.359148 | orchestrator | 2026-03-07 00:56:43.359151 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2026-03-07 00:56:43.359155 | orchestrator | Saturday 07 March 2026 00:52:57 +0000 (0:00:01.060) 0:03:55.489 ******** 2026-03-07 00:56:43.359159 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:56:43.359163 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:56:43.359167 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:56:43.359171 | orchestrator | 2026-03-07 00:56:43.359178 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2026-03-07 00:56:43.359184 | orchestrator | Saturday 07 March 2026 00:52:58 +0000 (0:00:01.440) 0:03:56.929 ******** 2026-03-07 00:56:43.359190 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:56:43.359197 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:56:43.359203 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:56:43.359208 | orchestrator | 2026-03-07 00:56:43.359215 | orchestrator | TASK [include_role : manila] *************************************************** 2026-03-07 00:56:43.359221 | orchestrator | Saturday 07 March 2026 00:53:00 +0000 (0:00:02.342) 0:03:59.272 
******** 2026-03-07 00:56:43.359228 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 00:56:43.359233 | orchestrator | 2026-03-07 00:56:43.359239 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2026-03-07 00:56:43.359245 | orchestrator | Saturday 07 March 2026 00:53:02 +0000 (0:00:01.661) 0:04:00.933 ******** 2026-03-07 00:56:43.359252 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-07 00:56:43.359268 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-07 00:56:43.359279 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-07 00:56:43.359287 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-07 00:56:43.359293 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-07 00:56:43.359299 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-07 00:56:43.359306 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-07 00:56:43.359321 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-07 00:56:43.359331 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-07 00:56:43.359339 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-07 00:56:43.359346 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 
'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-07 00:56:43.359353 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-07 00:56:43.359360 | orchestrator | 2026-03-07 00:56:43.359366 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2026-03-07 00:56:43.359382 | orchestrator | Saturday 07 March 2026 00:53:06 +0000 (0:00:04.171) 0:04:05.105 ******** 2026-03-07 00:56:43.359393 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-07 00:56:43.359404 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-07 00:56:43.359437 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-07 00:56:43.359444 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': 
'30'}}})  2026-03-07 00:56:43.359451 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:56:43.359463 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-07 00:56:43.359471 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-07 00:56:43.359497 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-07 00:56:43.359577 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-07 00:56:43.359586 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:56:43.359597 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-07 00:56:43.359605 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 
'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-07 00:56:43.359612 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-07 00:56:43.359618 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-07 00:56:43.359630 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:56:43.359636 | orchestrator | 2026-03-07 00:56:43.359642 | orchestrator | TASK [haproxy-config : Configuring 
firewall for manila] ************************ 2026-03-07 00:56:43.359649 | orchestrator | Saturday 07 March 2026 00:53:07 +0000 (0:00:00.843) 0:04:05.948 ******** 2026-03-07 00:56:43.359656 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-03-07 00:56:43.359663 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-03-07 00:56:43.359670 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:56:43.359677 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-03-07 00:56:43.359687 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-03-07 00:56:43.359694 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:56:43.359701 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-03-07 00:56:43.359708 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-03-07 00:56:43.359715 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:56:43.359721 | orchestrator | 2026-03-07 00:56:43.359747 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2026-03-07 00:56:43.359784 | orchestrator | Saturday 07 March 
2026 00:53:08 +0000 (0:00:01.302) 0:04:07.251 ******** 2026-03-07 00:56:43.359791 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:56:43.359798 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:56:43.359804 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:56:43.359811 | orchestrator | 2026-03-07 00:56:43.359818 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2026-03-07 00:56:43.359825 | orchestrator | Saturday 07 March 2026 00:53:10 +0000 (0:00:01.488) 0:04:08.740 ******** 2026-03-07 00:56:43.359831 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:56:43.359838 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:56:43.359845 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:56:43.359851 | orchestrator | 2026-03-07 00:56:43.359858 | orchestrator | TASK [include_role : mariadb] ************************************************** 2026-03-07 00:56:43.359865 | orchestrator | Saturday 07 March 2026 00:53:12 +0000 (0:00:02.257) 0:04:10.997 ******** 2026-03-07 00:56:43.359872 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 00:56:43.359879 | orchestrator | 2026-03-07 00:56:43.359885 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2026-03-07 00:56:43.359892 | orchestrator | Saturday 07 March 2026 00:53:14 +0000 (0:00:01.478) 0:04:12.476 ******** 2026-03-07 00:56:43.359899 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-07 00:56:43.359906 | orchestrator | 2026-03-07 00:56:43.359913 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2026-03-07 00:56:43.359926 | orchestrator | Saturday 07 March 2026 00:53:17 +0000 (0:00:03.056) 0:04:15.533 ******** 2026-03-07 00:56:43.359934 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-07 00:56:43.359953 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': 
['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-07 00:56:43.359960 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:56:43.360496 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server 
testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-07 00:56:43.360513 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-07 00:56:43.360527 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:56:43.360545 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 
'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-07 00:56:43.360561 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-03-07 00:56:43.360574 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:56:43.360581 | orchestrator |
2026-03-07 00:56:43.360588 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] ***
2026-03-07 00:56:43.360595 | orchestrator | Saturday 07 March 2026 00:53:19 +0000 (0:00:02.426) 0:04:17.959 ********
2026-03-07 00:56:43.360602 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-07 00:56:43.360614 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-03-07 00:56:43.360621 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:56:43.360633 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-07 00:56:43.360641 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-03-07 00:56:43.360653 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:56:43.360660 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-07 00:56:43.360668 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-03-07 00:56:43.360674 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:56:43.360680 | orchestrator |
2026-03-07 00:56:43.360686 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] ***********************
2026-03-07 00:56:43.360692 | orchestrator | Saturday 07 March 2026 00:53:22 +0000 (0:00:02.934) 0:04:20.894 ********
2026-03-07 00:56:43.360704 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-03-07 00:56:43.360711 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-03-07 00:56:43.360721 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:56:43.360728 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-03-07 00:56:43.360735 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-03-07 00:56:43.360741 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:56:43.360747 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-03-07 00:56:43.360785 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-03-07 00:56:43.360791 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:56:43.360798 | orchestrator |
2026-03-07 00:56:43.360804 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************
2026-03-07 00:56:43.360810 | orchestrator | Saturday 07 March 2026 00:53:25 +0000 (0:00:03.281) 0:04:24.176 ********
2026-03-07 00:56:43.360816 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:56:43.360822 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:56:43.360827 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:56:43.360832 | orchestrator |
2026-03-07 00:56:43.360838 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************
2026-03-07 00:56:43.360843 | orchestrator | Saturday 07 March 2026 00:53:27 +0000 (0:00:02.048) 0:04:26.224 ********
2026-03-07 00:56:43.360849 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:56:43.360854 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:56:43.360860 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:56:43.360866 | orchestrator |
2026-03-07 00:56:43.360876 | orchestrator | TASK [include_role : masakari] *************************************************
2026-03-07 00:56:43.360910 | orchestrator | Saturday 07 March 2026 00:53:29 +0000 (0:00:01.766) 0:04:27.991 ********
2026-03-07 00:56:43.360950 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:56:43.360958 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:56:43.360965 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:56:43.360971 | orchestrator |
2026-03-07 00:56:43.360991 | orchestrator | TASK [include_role : memcached] ************************************************
2026-03-07 00:56:43.360999 | orchestrator | Saturday 07 March 2026 00:53:30 +0000 (0:00:00.379) 0:04:28.371 ********
2026-03-07 00:56:43.361032 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-07 00:56:43.361040 | orchestrator |
2026-03-07 00:56:43.361046 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ******************
2026-03-07 00:56:43.361052 | orchestrator | Saturday 07 March 2026 00:53:31 +0000 (0:00:01.466) 0:04:29.837 ********
2026-03-07 00:56:43.361059 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-03-07 00:56:43.361067 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-03-07 00:56:43.361074 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-03-07 00:56:43.361080 | orchestrator |
2026-03-07 00:56:43.361086 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] ***
2026-03-07 00:56:43.361093 | orchestrator | Saturday 07 March 2026 00:53:33 +0000 (0:00:01.596) 0:04:31.434 ********
2026-03-07 00:56:43.361100 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-03-07 00:56:43.361112 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:56:43.361138 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-03-07 00:56:43.361146 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:56:43.361153 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-03-07 00:56:43.361160 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:56:43.361166 | orchestrator |
2026-03-07 00:56:43.361172 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] *********************
2026-03-07 00:56:43.361178 | orchestrator | Saturday 07 March 2026 00:53:33 +0000 (0:00:00.437) 0:04:31.871 ********
2026-03-07 00:56:43.361185 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2026-03-07 00:56:43.361193 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2026-03-07 00:56:43.361200 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:56:43.361207 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:56:43.361215 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2026-03-07 00:56:43.361222 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:56:43.361229 | orchestrator |
2026-03-07 00:56:43.361238 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] **********
2026-03-07 00:56:43.361245 | orchestrator | Saturday 07 March 2026 00:53:34 +0000 (0:00:00.957) 0:04:32.829 ********
2026-03-07 00:56:43.361252 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:56:43.361259 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:56:43.361266 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:56:43.361282 | orchestrator |
2026-03-07 00:56:43.361290 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] **********
2026-03-07 00:56:43.361298 | orchestrator | Saturday 07 March 2026 00:53:35 +0000 (0:00:00.501) 0:04:33.331 ********
2026-03-07 00:56:43.361306 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:56:43.361314 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:56:43.361328 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:56:43.361336 | orchestrator |
2026-03-07 00:56:43.361343 | orchestrator | TASK [include_role : mistral] **************************************************
2026-03-07 00:56:43.361357 | orchestrator | Saturday 07 March 2026 00:53:36 +0000 (0:00:01.405) 0:04:34.736 ********
2026-03-07 00:56:43.361365 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:56:43.361372 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:56:43.361380 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:56:43.361387 | orchestrator |
2026-03-07 00:56:43.361394 | orchestrator | TASK [include_role : neutron] **************************************************
2026-03-07 00:56:43.361401 | orchestrator | Saturday 07 March 2026 00:53:36 +0000 (0:00:00.372) 0:04:35.110 ********
2026-03-07 00:56:43.361409 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-07 00:56:43.361416 | orchestrator |
2026-03-07 00:56:43.361423 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ********************
2026-03-07 00:56:43.361431 | orchestrator | Saturday 07 March 2026 00:53:38 +0000 (0:00:01.608) 0:04:36.718 ********
2026-03-07 00:56:43.361451 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-07 00:56:43.361461 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-07 00:56:43.361469 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2026-03-07 00:56:43.361476 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2026-03-07 00:56:43.361488 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2026-03-07 00:56:43.361517 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2026-03-07 00:56:43.361526 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2026-03-07 00:56:43.361533 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2026-03-07 00:56:43.361540 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2026-03-07 00:56:43.361552 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2026-03-07 00:56:43.361559 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2026-03-07 00:56:43.361574 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-07 00:56:43.361583 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2026-03-07 00:56:43.361590 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-07 00:56:43.361598 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2026-03-07 00:56:43.361609 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-07 00:56:43.361617 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-07 00:56:43.361623 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-07 00:56:43.361637 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2026-03-07 00:56:43.361644 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2026-03-07 00:56:43.361650 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2026-03-07 00:56:43.361658 |
orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-07 00:56:43.361669 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-07 00:56:43.361677 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-07 
00:56:43.361687 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-07 00:56:43.361699 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-07 00:56:43.361707 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-07 00:56:43.361719 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-07 00:56:43.361725 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-07 00:56:43.361732 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': 
{'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-07 00:56:43.361744 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-07 00:56:43.361763 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-07 00:56:43.361769 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-07 00:56:43.361781 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-07 00:56:43.361788 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-07 00:56:43.361798 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-07 00:56:43.361809 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-07 00:56:43.361816 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-07 00:56:43.361823 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-07 00:56:43.361833 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-07 00:56:43.361841 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-07 00:56:43.361849 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-07 00:56:43.361862 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-07 00:56:43.361869 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 
'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-07 00:56:43.361876 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-07 00:56:43.361890 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-07 00:56:43.361897 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-03-07 00:56:43.361905 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-03-07 00:56:43.361912 | orchestrator |
2026-03-07 00:56:43.361918 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] ***
2026-03-07 00:56:43.361924 | orchestrator | Saturday 07 March 2026 00:53:42 +0000 (0:00:04.564) 0:04:41.283 ********
2026-03-07 00:56:43.361938 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-07 00:56:43.361944 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-07 00:56:43.361955 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-07 00:56:43.361961 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-07 00:56:43.361968 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-07 00:56:43.361977 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-07 00:56:43.361987 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-07 00:56:43.361994 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-07 00:56:43.362006 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 
'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-07 00:56:43.362039 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-07 00:56:43.362047 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-07 00:56:43.362054 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 
'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2026-03-07 00:56:43.362389 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-07 00:56:43.362404 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2026-03-07 00:56:43.362411 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2026-03-07 00:56:43.362418 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2026-03-07 00:56:43.362425 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2026-03-07 00:56:43.362432 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2026-03-07 00:56:43.362458 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-07 00:56:43.362466 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-07 00:56:43.362477 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2026-03-07 00:56:43.362484 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-07 00:56:43.362491 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-03-07 00:56:43.362498 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2026-03-07 00:56:43.362523 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-07 00:56:43.362534 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image':
'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-03-07 00:56:43.362541 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:56:43.362547 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-07 00:56:43.362554 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2026-03-07 00:56:43.362562 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2026-03-07 00:56:43.362569 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2026-03-07 00:56:43.362617 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2026-03-07 00:56:43.362630 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2026-03-07 00:56:43.362636 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-07 00:56:43.362643 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2026-03-07 00:56:43.362650 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2026-03-07 00:56:43.362682 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2026-03-07 00:56:43.362762 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-03-07 00:56:43.362775 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-07 00:56:43.362783 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-03-07 00:56:43.362790 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-07 00:56:43.362796 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:56:43.362803 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2026-03-07 00:56:43.362810 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-07 00:56:43.362856 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2026-03-07 00:56:43.362869 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2026-03-07 00:56:43.363134 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-07 00:56:43.363143 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2026-03-07 00:56:43.363150 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-03-07 00:56:43.363157 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent
6640'], 'timeout': '30'}}})
2026-03-07 00:56:43.363164 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:56:43.363170 | orchestrator |
2026-03-07 00:56:43.363177 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] ***********************
2026-03-07 00:56:43.363189 | orchestrator | Saturday 07 March 2026 00:53:44 +0000 (0:00:01.681) 0:04:42.964 ********
2026-03-07 00:56:43.363195 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})
2026-03-07 00:56:43.363206 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})
2026-03-07 00:56:43.363213 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:56:43.363259 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})
2026-03-07 00:56:43.363269 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})
2026-03-07 00:56:43.363284 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:56:43.363291 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})
2026-03-07 00:56:43.363297 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})
2026-03-07 00:56:43.363304 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:56:43.363311 | orchestrator |
2026-03-07 00:56:43.363318 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************
2026-03-07 00:56:43.363324 | orchestrator | Saturday 07 March 2026 00:53:46 +0000 (0:00:02.254) 0:04:45.218 ********
2026-03-07 00:56:43.363331 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:56:43.363337 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:56:43.363344 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:56:43.363350 | orchestrator |
2026-03-07 00:56:43.363357 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************
2026-03-07 00:56:43.363363 | orchestrator | Saturday 07 March 2026 00:53:48 +0000 (0:00:01.338) 0:04:46.556 ********
2026-03-07 00:56:43.363370 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:56:43.363376 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:56:43.363383 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:56:43.363389 | orchestrator |
2026-03-07 00:56:43.363396 | orchestrator | TASK [include_role : placement] ************************************************
2026-03-07 00:56:43.363402 | orchestrator | Saturday 07 March 2026 00:53:50 +0000 (0:00:02.188) 0:04:48.745 ********
2026-03-07 00:56:43.363409 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-07 00:56:43.363416 | orchestrator |
2026-03-07 00:56:43.363422 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ******************
2026-03-07 00:56:43.363429 | orchestrator | Saturday 07 March 2026 00:53:51 +0000 (0:00:01.441) 0:04:50.186 ********
2026-03-07 00:56:43.363436 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-07 00:56:43.363451 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-07 00:56:43.363486 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro',
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-07 00:56:43.363493 | orchestrator |
2026-03-07 00:56:43.363500 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] ***
2026-03-07 00:56:43.363507 | orchestrator | Saturday 07 March 2026 00:53:55 +0000 (0:00:03.521) 0:04:53.708 ********
2026-03-07 00:56:43.363513 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-07 00:56:43.363520 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:56:43.363527 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-07 00:56:43.363538 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:56:43.363552 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-07 00:56:43.363558 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:56:43.363565 | orchestrator |
2026-03-07 00:56:43.363572 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] *********************
2026-03-07 00:56:43.363578 | orchestrator | Saturday 07 March 2026 00:53:55 +0000 (0:00:00.495) 0:04:54.203 ********
2026-03-07 00:56:43.363597 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2026-03-07 00:56:43.363608 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2026-03-07 00:56:43.363615 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:56:43.363639 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2026-03-07 00:56:43.363646 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2026-03-07 00:56:43.363653 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:56:43.363659 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2026-03-07 00:56:43.363666 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2026-03-07 00:56:43.363672 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:56:43.363679 | orchestrator |
2026-03-07 00:56:43.363685 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] **********
2026-03-07 00:56:43.363692 | orchestrator | Saturday 07 March 2026 00:53:56 +0000 (0:00:00.718) 0:04:54.922 ********
2026-03-07 00:56:43.363698 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:56:43.363705 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:56:43.363711 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:56:43.363718 | orchestrator |
2026-03-07 00:56:43.363724 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] **********
2026-03-07 00:56:43.363731 | orchestrator | Saturday 07 March 2026 00:53:58 +0000 (0:00:01.753) 0:04:56.675 ********
2026-03-07 00:56:43.363737 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:56:43.363744 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:56:43.363780 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:56:43.363788 | orchestrator |
2026-03-07 00:56:43.363795 | orchestrator | TASK [include_role : nova] *****************************************************
2026-03-07 00:56:43.363801 | orchestrator | Saturday 07 March 2026 00:54:00 +0000 (0:00:01.399) 0:04:58.435 ********
2026-03-07 00:56:43.363813 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-07 00:56:43.363819 | orchestrator |
2026-03-07 00:56:43.363826 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] ***********************
2026-03-07 00:56:43.363832 | orchestrator | Saturday 07 March 2026 00:54:01 +0000 (0:00:01.399) 0:04:59.835 ********
2026-03-07 00:56:43.363848 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-07 00:56:43.363856 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-07 00:56:43.363885 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck':
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-07 00:56:43.363892 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-07 00:56:43.363904 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 
'timeout': '30'}}})  2026-03-07 00:56:43.363911 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-07 00:56:43.363918 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-07 00:56:43.363943 | orchestrator | skipping: [testbed-node-1] 
=> (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-07 00:56:43.363951 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-07 00:56:43.363958 | orchestrator | 2026-03-07 00:56:43.363964 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2026-03-07 00:56:43.363971 | orchestrator | Saturday 07 March 2026 00:54:06 +0000 (0:00:04.967) 0:05:04.803 ******** 2026-03-07 00:56:43.363989 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-07 00:56:43.363996 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-07 00:56:43.364003 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': 
'30'}}})  2026-03-07 00:56:43.364010 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:56:43.364037 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-07 00:56:43.364045 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-07 00:56:43.364055 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-07 00:56:43.364062 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:56:43.364069 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-07 00:56:43.364076 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-07 00:56:43.364102 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-07 00:56:43.364109 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:56:43.364116 | orchestrator | 2026-03-07 00:56:43.364123 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2026-03-07 00:56:43.364129 | orchestrator | Saturday 07 March 2026 00:54:07 +0000 (0:00:01.377) 0:05:06.180 ******** 2026-03-07 00:56:43.364136 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-07 00:56:43.364148 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-07 00:56:43.364155 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-07 00:56:43.364162 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-07 00:56:43.364168 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:56:43.364175 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-07 00:56:43.364182 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-07 00:56:43.364188 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-07 00:56:43.364195 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-07 00:56:43.364201 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:56:43.364208 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-07 
00:56:43.364215 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-07 00:56:43.364221 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-07 00:56:43.364228 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-07 00:56:43.364235 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:56:43.364241 | orchestrator | 2026-03-07 00:56:43.364248 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2026-03-07 00:56:43.364254 | orchestrator | Saturday 07 March 2026 00:54:08 +0000 (0:00:01.103) 0:05:07.283 ******** 2026-03-07 00:56:43.364261 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:56:43.364268 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:56:43.364274 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:56:43.364280 | orchestrator | 2026-03-07 00:56:43.364287 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2026-03-07 00:56:43.364294 | orchestrator | Saturday 07 March 2026 00:54:10 +0000 (0:00:01.468) 0:05:08.752 ******** 2026-03-07 00:56:43.364300 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:56:43.364307 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:56:43.364313 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:56:43.364320 | orchestrator | 2026-03-07 00:56:43.364326 | orchestrator | TASK [include_role : nova-cell] 
************************************************ 2026-03-07 00:56:43.364332 | orchestrator | Saturday 07 March 2026 00:54:12 +0000 (0:00:02.246) 0:05:10.999 ******** 2026-03-07 00:56:43.364346 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 00:56:43.364352 | orchestrator | 2026-03-07 00:56:43.364365 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2026-03-07 00:56:43.364391 | orchestrator | Saturday 07 March 2026 00:54:14 +0000 (0:00:01.787) 0:05:12.786 ******** 2026-03-07 00:56:43.364399 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-2, testbed-node-1 => (item=nova-novncproxy) 2026-03-07 00:56:43.364405 | orchestrator | 2026-03-07 00:56:43.364412 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2026-03-07 00:56:43.364418 | orchestrator | Saturday 07 March 2026 00:54:15 +0000 (0:00:00.975) 0:05:13.761 ******** 2026-03-07 00:56:43.364425 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-03-07 00:56:43.364432 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-03-07 00:56:43.364439 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-03-07 00:56:43.364446 | orchestrator | 2026-03-07 00:56:43.364452 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2026-03-07 00:56:43.364459 | orchestrator | Saturday 07 March 2026 00:54:19 +0000 (0:00:04.492) 0:05:18.254 ******** 2026-03-07 00:56:43.364466 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-07 00:56:43.364479 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 
'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-07 00:56:43.364486 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:56:43.364493 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:56:43.364500 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-07 00:56:43.364514 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:56:43.364521 | orchestrator | 2026-03-07 00:56:43.364527 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2026-03-07 00:56:43.364534 | orchestrator | Saturday 07 March 2026 00:54:21 +0000 (0:00:01.291) 0:05:19.545 ******** 2026-03-07 00:56:43.364559 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-07 00:56:43.364567 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-07 00:56:43.364574 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:56:43.364580 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 
1h']}})  2026-03-07 00:56:43.364587 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-07 00:56:43.364594 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:56:43.364601 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-07 00:56:43.364607 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-07 00:56:43.364614 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:56:43.364620 | orchestrator | 2026-03-07 00:56:43.364627 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-03-07 00:56:43.364634 | orchestrator | Saturday 07 March 2026 00:54:22 +0000 (0:00:01.735) 0:05:21.281 ******** 2026-03-07 00:56:43.364640 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:56:43.364647 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:56:43.364653 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:56:43.364660 | orchestrator | 2026-03-07 00:56:43.364666 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-03-07 00:56:43.364673 | orchestrator | Saturday 07 March 2026 00:54:25 +0000 (0:00:02.749) 0:05:24.030 ******** 2026-03-07 00:56:43.364679 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:56:43.364686 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:56:43.364692 | orchestrator | changed: 
[testbed-node-1] 2026-03-07 00:56:43.364699 | orchestrator | 2026-03-07 00:56:43.364706 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2026-03-07 00:56:43.364712 | orchestrator | Saturday 07 March 2026 00:54:28 +0000 (0:00:03.155) 0:05:27.185 ******** 2026-03-07 00:56:43.364719 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2026-03-07 00:56:43.364726 | orchestrator | 2026-03-07 00:56:43.364732 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2026-03-07 00:56:43.364739 | orchestrator | Saturday 07 March 2026 00:54:30 +0000 (0:00:01.201) 0:05:28.386 ******** 2026-03-07 00:56:43.364760 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-07 00:56:43.364768 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:56:43.364775 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  
2026-03-07 00:56:43.364782 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:56:43.364810 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-07 00:56:43.364817 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:56:43.364824 | orchestrator | 2026-03-07 00:56:43.364831 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2026-03-07 00:56:43.364838 | orchestrator | Saturday 07 March 2026 00:54:31 +0000 (0:00:01.565) 0:05:29.951 ******** 2026-03-07 00:56:43.364844 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-07 00:56:43.364851 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:56:43.364858 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout 
tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-07 00:56:43.364865 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:56:43.364871 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-07 00:56:43.364878 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:56:43.364888 | orchestrator | 2026-03-07 00:56:43.364895 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2026-03-07 00:56:43.364902 | orchestrator | Saturday 07 March 2026 00:54:32 +0000 (0:00:01.240) 0:05:31.192 ******** 2026-03-07 00:56:43.364908 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:56:43.364914 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:56:43.364921 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:56:43.364934 | orchestrator | 2026-03-07 00:56:43.364941 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-03-07 00:56:43.364947 | orchestrator | Saturday 07 March 2026 00:54:35 +0000 (0:00:02.306) 0:05:33.499 ******** 2026-03-07 00:56:43.364954 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:56:43.364960 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:56:43.364967 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:56:43.364973 | orchestrator | 2026-03-07 00:56:43.364980 | 
orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-03-07 00:56:43.364986 | orchestrator | Saturday 07 March 2026 00:54:37 +0000 (0:00:02.547) 0:05:36.047 ******** 2026-03-07 00:56:43.364993 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:56:43.364999 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:56:43.365006 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:56:43.365012 | orchestrator | 2026-03-07 00:56:43.365018 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2026-03-07 00:56:43.365025 | orchestrator | Saturday 07 March 2026 00:54:40 +0000 (0:00:03.245) 0:05:39.293 ******** 2026-03-07 00:56:43.365032 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2026-03-07 00:56:43.365038 | orchestrator | 2026-03-07 00:56:43.365045 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2026-03-07 00:56:43.365051 | orchestrator | Saturday 07 March 2026 00:54:41 +0000 (0:00:01.008) 0:05:40.301 ******** 2026-03-07 00:56:43.365058 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-07 00:56:43.365064 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:56:43.365092 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': 
{'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-07 00:56:43.365099 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:56:43.365106 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-07 00:56:43.365113 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:56:43.365119 | orchestrator | 2026-03-07 00:56:43.365126 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2026-03-07 00:56:43.365136 | orchestrator | Saturday 07 March 2026 00:54:43 +0000 (0:00:01.578) 0:05:41.880 ******** 2026-03-07 00:56:43.365143 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-07 
00:56:43.365150 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:56:43.365156 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-07 00:56:43.365163 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:56:43.365170 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-07 00:56:43.365177 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:56:43.365183 | orchestrator | 2026-03-07 00:56:43.365190 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2026-03-07 00:56:43.365196 | orchestrator | Saturday 07 March 2026 00:54:45 +0000 (0:00:01.567) 0:05:43.448 ******** 2026-03-07 00:56:43.365203 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:56:43.365209 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:56:43.365216 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:56:43.365222 | orchestrator | 2026-03-07 00:56:43.365229 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] 
********** 2026-03-07 00:56:43.365236 | orchestrator | Saturday 07 March 2026 00:54:47 +0000 (0:00:01.859) 0:05:45.308 ******** 2026-03-07 00:56:43.365242 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:56:43.365249 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:56:43.365255 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:56:43.365261 | orchestrator | 2026-03-07 00:56:43.365268 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-03-07 00:56:43.365274 | orchestrator | Saturday 07 March 2026 00:54:49 +0000 (0:00:02.557) 0:05:47.865 ******** 2026-03-07 00:56:43.365281 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:56:43.365287 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:56:43.365294 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:56:43.365300 | orchestrator | 2026-03-07 00:56:43.365306 | orchestrator | TASK [include_role : octavia] ************************************************** 2026-03-07 00:56:43.365313 | orchestrator | Saturday 07 March 2026 00:54:53 +0000 (0:00:03.504) 0:05:51.369 ******** 2026-03-07 00:56:43.365319 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 00:56:43.365325 | orchestrator | 2026-03-07 00:56:43.365334 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2026-03-07 00:56:43.365343 | orchestrator | Saturday 07 March 2026 00:54:55 +0000 (0:00:02.030) 0:05:53.400 ******** 2026-03-07 00:56:43.365369 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-07 00:56:43.365381 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-07 00:56:43.365388 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-07 00:56:43.365395 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': 
['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-07 00:56:43.365403 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-07 00:56:43.365410 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-07 00:56:43.365436 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-07 00:56:43.365444 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-07 00:56:43.365450 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-07 00:56:43.365457 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-07 00:56:43.365463 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-07 00:56:43.365470 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-07 00:56:43.365551 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-07 00:56:43.365573 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-07 00:56:43.365580 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-07 00:56:43.365586 | orchestrator | 2026-03-07 00:56:43.365593 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external 
frontend] *** 2026-03-07 00:56:43.365600 | orchestrator | Saturday 07 March 2026 00:54:59 +0000 (0:00:04.191) 0:05:57.592 ******** 2026-03-07 00:56:43.365607 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-07 00:56:43.365614 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-07 00:56:43.365621 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-07 00:56:43.365652 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-07 00:56:43.365659 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-07 00:56:43.365666 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:56:43.365673 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': 
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-07 00:56:43.365680 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-07 00:56:43.365687 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-07 00:56:43.365693 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-07 00:56:43.365715 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-07 00:56:43.365740 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:56:43.365747 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': 
'9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-07 00:56:43.365764 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-07 00:56:43.365771 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-07 00:56:43.365778 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-07 00:56:43.365785 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-07 00:56:43.365796 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:56:43.365802 | orchestrator | 2026-03-07 00:56:43.365809 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2026-03-07 00:56:43.365816 | orchestrator | Saturday 07 March 2026 00:55:00 +0000 (0:00:00.828) 0:05:58.420 ******** 2026-03-07 00:56:43.365823 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-07 00:56:43.365829 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-07 00:56:43.365839 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:56:43.365862 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-07 00:56:43.365869 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-07 00:56:43.365876 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:56:43.365882 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-07 00:56:43.365889 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-07 00:56:43.365896 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:56:43.365902 | orchestrator | 2026-03-07 00:56:43.365909 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2026-03-07 00:56:43.365916 | orchestrator | Saturday 07 March 2026 00:55:01 +0000 (0:00:01.831) 0:06:00.251 ******** 2026-03-07 00:56:43.365922 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:56:43.365929 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:56:43.365943 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:56:43.365950 | orchestrator | 2026-03-07 00:56:43.365956 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2026-03-07 00:56:43.365963 | orchestrator | Saturday 07 March 2026 00:55:03 +0000 (0:00:01.463) 0:06:01.715 ******** 2026-03-07 00:56:43.365969 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:56:43.365976 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:56:43.365982 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:56:43.365989 | orchestrator | 2026-03-07 00:56:43.365995 | orchestrator | TASK [include_role : opensearch] *********************************************** 2026-03-07 00:56:43.366002 | orchestrator | Saturday 07 
March 2026 00:55:05 +0000 (0:00:02.313) 0:06:04.029 ******** 2026-03-07 00:56:43.366008 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 00:56:43.366040 | orchestrator | 2026-03-07 00:56:43.366047 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2026-03-07 00:56:43.366054 | orchestrator | Saturday 07 March 2026 00:55:07 +0000 (0:00:01.703) 0:06:05.732 ******** 2026-03-07 00:56:43.366068 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-07 00:56:43.366080 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-07 00:56:43.366108 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-07 00:56:43.366116 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-07 00:56:43.366123 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-07 00:56:43.366135 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-07 00:56:43.366142 | orchestrator | 2026-03-07 00:56:43.366149 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2026-03-07 00:56:43.366155 | orchestrator | Saturday 07 March 2026 00:55:13 +0000 (0:00:05.906) 0:06:11.638 ******** 2026-03-07 00:56:43.366180 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-07 00:56:43.366188 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-07 00:56:43.366195 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:56:43.366202 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-07 00:56:43.366212 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-07 00:56:43.366219 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:56:43.366254 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-07 00:56:43.366261 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-07 00:56:43.366269 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:56:43.366275 | orchestrator | 2026-03-07 00:56:43.366282 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2026-03-07 00:56:43.366288 | orchestrator | Saturday 07 March 2026 00:55:14 +0000 (0:00:00.872) 0:06:12.511 ******** 2026-03-07 00:56:43.366295 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-03-07 00:56:43.366302 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-07 00:56:43.366313 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-07 00:56:43.366320 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:56:43.366327 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-03-07 00:56:43.366333 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-07 00:56:43.366347 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-07 00:56:43.366354 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:56:43.366360 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-03-07 00:56:43.366367 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-07 00:56:43.366374 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-07 00:56:43.366380 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:56:43.366387 | orchestrator | 2026-03-07 00:56:43.366394 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2026-03-07 00:56:43.366400 | orchestrator | Saturday 07 March 2026 00:55:15 +0000 (0:00:01.096) 0:06:13.608 ******** 2026-03-07 
00:56:43.366407 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:56:43.366413 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:56:43.366419 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:56:43.366425 | orchestrator | 2026-03-07 00:56:43.366431 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2026-03-07 00:56:43.366438 | orchestrator | Saturday 07 March 2026 00:55:16 +0000 (0:00:00.972) 0:06:14.580 ******** 2026-03-07 00:56:43.366448 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:56:43.366454 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:56:43.366460 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:56:43.366466 | orchestrator | 2026-03-07 00:56:43.366491 | orchestrator | TASK [include_role : prometheus] *********************************************** 2026-03-07 00:56:43.366498 | orchestrator | Saturday 07 March 2026 00:55:17 +0000 (0:00:01.661) 0:06:16.241 ******** 2026-03-07 00:56:43.366504 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 00:56:43.366511 | orchestrator | 2026-03-07 00:56:43.366517 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2026-03-07 00:56:43.366524 | orchestrator | Saturday 07 March 2026 00:55:19 +0000 (0:00:01.658) 0:06:17.899 ******** 2026-03-07 00:56:43.366531 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 
'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-07 00:56:43.366542 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-07 00:56:43.366549 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-07 00:56:43.366556 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-07 00:56:43.366563 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 
'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-07 00:56:43.366574 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-07 00:56:43.366599 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-07 00:56:43.366606 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-07 00:56:43.366617 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-07 00:56:43.366624 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-07 00:56:43.366631 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-07 00:56:43.366638 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-07 00:56:43.366645 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-07 00:56:43.366670 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-07 00:56:43.366677 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 
'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-07 00:56:43.366690 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-07 00:56:43.366706 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-07 00:56:43.366713 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-07 00:56:43.366720 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-07 00:56:43.366730 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  
2026-03-07 00:56:43.366742 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-03-07 00:56:43.366783 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2026-03-07 00:56:43.366792 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-03-07 00:56:43.366799 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:56:43.366806 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:56:43.366822 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2026-03-07 00:56:43.366833 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-07 00:56:43.366840 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:56:43.366848 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:56:43.366855 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-07 00:56:43.366862 | orchestrator | 
2026-03-07 00:56:43.366869 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] ***
2026-03-07 00:56:43.366875 | orchestrator | Saturday 07 March 2026 00:55:24 +0000 (0:00:04.822) 0:06:22.721 ********
2026-03-07 00:56:43.366882 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-03-07 00:56:43.366892 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-07 00:56:43.366907 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:56:43.366915 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:56:43.366922 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-07 00:56:43.366929 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-03-07 00:56:43.366937 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2026-03-07 00:56:43.366944 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:56:43.366965 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:56:43.366972 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-07 00:56:43.366980 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:56:43.366987 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-03-07 00:56:43.366993 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-07 00:56:43.367001 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:56:43.367008 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:56:43.367016 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-07 00:56:43.367034 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-03-07 00:56:43.367042 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2026-03-07 00:56:43.367049 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:56:43.367056 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:56:43.367063 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-07 00:56:43.367070 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:56:43.367077 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-03-07 00:56:43.367108 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-07 00:56:43.367116 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:56:43.367123 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:56:43.367130 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-07 00:56:43.367145 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-03-07 00:56:43.367158 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2026-03-07 00:56:43.367169 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:56:43.367182 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:56:43.367188 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-07 00:56:43.367195 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:56:43.367202 | orchestrator | 
2026-03-07 00:56:43.367209 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ********************
2026-03-07 00:56:43.367215 | orchestrator | Saturday 07 March 2026 00:55:25 +0000 (0:00:01.405) 0:06:24.127 ********
2026-03-07 00:56:43.367222 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})
2026-03-07 00:56:43.367229 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})
2026-03-07 00:56:43.367236 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-03-07 00:56:43.367243 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-03-07 00:56:43.367250 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:56:43.367257 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})
2026-03-07 00:56:43.367264 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})
2026-03-07 00:56:43.367271 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-03-07 00:56:43.367281 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-03-07 00:56:43.367287 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:56:43.367294 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})
2026-03-07 00:56:43.367300 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})
2026-03-07 00:56:43.367307 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-03-07 00:56:43.367319 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-03-07 00:56:43.367326 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:56:43.367333 | orchestrator | 
2026-03-07 00:56:43.367340 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] *********
2026-03-07 00:56:43.367346 | orchestrator | Saturday 07 March 2026 00:55:26 +0000 (0:00:00.530) 0:06:25.261 ********
2026-03-07 00:56:43.367353 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:56:43.367359 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:56:43.367366 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:56:43.367372 | orchestrator | 
2026-03-07 00:56:43.367379 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] *********
2026-03-07 00:56:43.367385 | orchestrator | Saturday 07 March 2026 00:55:27 +0000 (0:00:00.530) 0:06:25.791 ********
2026-03-07 00:56:43.367392 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:56:43.367399 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:56:43.367405 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:56:43.367411 | orchestrator | 
2026-03-07 00:56:43.367418 | orchestrator | TASK [include_role : rabbitmq] *************************************************
2026-03-07 00:56:43.367424 | orchestrator | Saturday 07 March 2026 00:55:29 +0000 (0:00:01.733) 0:06:27.525 ********
2026-03-07 00:56:43.367431 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-07 00:56:43.367437 | orchestrator | 
2026-03-07 00:56:43.367444 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] *******************
2026-03-07 00:56:43.367451 | orchestrator | Saturday 07 March 2026 00:55:31 +0000 (0:00:02.076) 0:06:29.601 ********
2026-03-07 00:56:43.367457 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-07 00:56:43.367468 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-07 00:56:43.367478 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-07 00:56:43.367486 | orchestrator | 
2026-03-07 00:56:43.367495 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] ***
2026-03-07 00:56:43.367501 | orchestrator | Saturday 07 March 2026 00:55:33 +0000 (0:00:02.689) 0:06:32.291 ********
2026-03-07 00:56:43.367508 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-07 00:56:43.367515 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:56:43.367522 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-07 00:56:43.367533 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:56:43.367539 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-07 00:56:43.367546 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:56:43.367553 | orchestrator | 
2026-03-07 00:56:43.367560 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] **********************
2026-03-07 00:56:43.367566 | orchestrator | Saturday 07 March 2026 00:55:34 +0000 (0:00:00.435) 0:06:32.726 ********
2026-03-07 00:56:43.367573 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2026-03-07 00:56:43.367579 |
orchestrator | skipping: [testbed-node-0] 2026-03-07 00:56:43.367586 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-03-07 00:56:43.367592 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:56:43.367599 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-03-07 00:56:43.367606 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:56:43.367612 | orchestrator | 2026-03-07 00:56:43.367621 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2026-03-07 00:56:43.367628 | orchestrator | Saturday 07 March 2026 00:55:35 +0000 (0:00:01.267) 0:06:33.994 ******** 2026-03-07 00:56:43.367637 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:56:43.367644 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:56:43.367650 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:56:43.367657 | orchestrator | 2026-03-07 00:56:43.367663 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2026-03-07 00:56:43.367670 | orchestrator | Saturday 07 March 2026 00:55:36 +0000 (0:00:00.478) 0:06:34.472 ******** 2026-03-07 00:56:43.367676 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:56:43.367683 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:56:43.367689 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:56:43.367696 | orchestrator | 2026-03-07 00:56:43.367702 | orchestrator | TASK [include_role : skyline] ************************************************** 2026-03-07 00:56:43.367709 | orchestrator | Saturday 07 March 2026 00:55:37 +0000 (0:00:01.528) 0:06:36.001 ******** 2026-03-07 00:56:43.367715 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 
00:56:43.367722 | orchestrator | 2026-03-07 00:56:43.367729 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2026-03-07 00:56:43.367739 | orchestrator | Saturday 07 March 2026 00:55:39 +0000 (0:00:01.975) 0:06:37.977 ******** 2026-03-07 00:56:43.367746 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-07 00:56:43.367763 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 
'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-07 00:56:43.367770 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-07 00:56:43.367790 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-07 00:56:43.367798 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-07 00:56:43.367810 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 
'tls_backend': 'no'}}}}) 2026-03-07 00:56:43.367817 | orchestrator | 2026-03-07 00:56:43.367824 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2026-03-07 00:56:43.367831 | orchestrator | Saturday 07 March 2026 00:55:46 +0000 (0:00:06.948) 0:06:44.925 ******** 2026-03-07 00:56:43.367838 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-07 00:56:43.367850 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-07 00:56:43.367857 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:56:43.367864 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-07 00:56:43.367875 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 
'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-07 00:56:43.367881 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:56:43.367888 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-07 00:56:43.367895 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 
'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-07 00:56:43.367904 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:56:43.367911 | orchestrator | 2026-03-07 00:56:43.367918 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2026-03-07 00:56:43.367927 | orchestrator | Saturday 07 March 2026 00:55:47 +0000 (0:00:00.751) 0:06:45.677 ******** 2026-03-07 00:56:43.367938 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-07 00:56:43.367944 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-07 00:56:43.367951 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-07 00:56:43.367958 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-07 00:56:43.367964 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:56:43.367971 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-07 00:56:43.367978 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-07 00:56:43.367984 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-07 00:56:43.367991 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-07 00:56:43.367998 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-07 00:56:43.368004 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-07 00:56:43.368011 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-07 00:56:43.368017 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:56:43.368024 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-07 00:56:43.368031 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:56:43.368037 | orchestrator | 2026-03-07 00:56:43.368044 | 
orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2026-03-07 00:56:43.368050 | orchestrator | Saturday 07 March 2026 00:55:49 +0000 (0:00:01.901) 0:06:47.578 ******** 2026-03-07 00:56:43.368057 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:56:43.368063 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:56:43.368070 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:56:43.368076 | orchestrator | 2026-03-07 00:56:43.368083 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2026-03-07 00:56:43.368089 | orchestrator | Saturday 07 March 2026 00:55:50 +0000 (0:00:01.376) 0:06:48.955 ******** 2026-03-07 00:56:43.368096 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:56:43.368102 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:56:43.368109 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:56:43.368120 | orchestrator | 2026-03-07 00:56:43.368127 | orchestrator | TASK [include_role : swift] **************************************************** 2026-03-07 00:56:43.368134 | orchestrator | Saturday 07 March 2026 00:55:52 +0000 (0:00:02.255) 0:06:51.211 ******** 2026-03-07 00:56:43.368140 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:56:43.368146 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:56:43.368153 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:56:43.368159 | orchestrator | 2026-03-07 00:56:43.368166 | orchestrator | TASK [include_role : tacker] *************************************************** 2026-03-07 00:56:43.368173 | orchestrator | Saturday 07 March 2026 00:55:53 +0000 (0:00:00.370) 0:06:51.582 ******** 2026-03-07 00:56:43.368179 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:56:43.368186 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:56:43.368194 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:56:43.368201 | orchestrator | 2026-03-07 00:56:43.368207 | 
orchestrator | TASK [include_role : trove] **************************************************** 2026-03-07 00:56:43.368216 | orchestrator | Saturday 07 March 2026 00:55:53 +0000 (0:00:00.395) 0:06:51.977 ******** 2026-03-07 00:56:43.368223 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:56:43.368229 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:56:43.368236 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:56:43.368242 | orchestrator | 2026-03-07 00:56:43.368249 | orchestrator | TASK [include_role : venus] **************************************************** 2026-03-07 00:56:43.368256 | orchestrator | Saturday 07 March 2026 00:55:54 +0000 (0:00:00.748) 0:06:52.726 ******** 2026-03-07 00:56:43.368262 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:56:43.368268 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:56:43.368275 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:56:43.368281 | orchestrator | 2026-03-07 00:56:43.368288 | orchestrator | TASK [include_role : watcher] ************************************************** 2026-03-07 00:56:43.368295 | orchestrator | Saturday 07 March 2026 00:55:54 +0000 (0:00:00.380) 0:06:53.107 ******** 2026-03-07 00:56:43.368301 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:56:43.368308 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:56:43.368314 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:56:43.368321 | orchestrator | 2026-03-07 00:56:43.368327 | orchestrator | TASK [include_role : zun] ****************************************************** 2026-03-07 00:56:43.368334 | orchestrator | Saturday 07 March 2026 00:55:55 +0000 (0:00:00.408) 0:06:53.515 ******** 2026-03-07 00:56:43.368340 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:56:43.368347 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:56:43.368353 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:56:43.368360 | orchestrator | 2026-03-07 00:56:43.368366 | 
orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2026-03-07 00:56:43.368379 | orchestrator | Saturday 07 March 2026 00:55:56 +0000 (0:00:01.020) 0:06:54.536 ******** 2026-03-07 00:56:43.368386 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:56:43.368399 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:56:43.368405 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:56:43.368412 | orchestrator | 2026-03-07 00:56:43.368418 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2026-03-07 00:56:43.368425 | orchestrator | Saturday 07 March 2026 00:55:56 +0000 (0:00:00.731) 0:06:55.268 ******** 2026-03-07 00:56:43.368431 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:56:43.368438 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:56:43.368444 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:56:43.368457 | orchestrator | 2026-03-07 00:56:43.368463 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2026-03-07 00:56:43.368477 | orchestrator | Saturday 07 March 2026 00:55:57 +0000 (0:00:00.431) 0:06:55.699 ******** 2026-03-07 00:56:43.368484 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:56:43.368490 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:56:43.368497 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:56:43.368503 | orchestrator | 2026-03-07 00:56:43.368514 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2026-03-07 00:56:43.368520 | orchestrator | Saturday 07 March 2026 00:55:58 +0000 (0:00:00.995) 0:06:56.695 ******** 2026-03-07 00:56:43.368527 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:56:43.368533 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:56:43.368540 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:56:43.368546 | orchestrator | 2026-03-07 00:56:43.368552 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql 
container] **************** 2026-03-07 00:56:43.368559 | orchestrator | Saturday 07 March 2026 00:55:59 +0000 (0:00:01.328) 0:06:58.023 ******** 2026-03-07 00:56:43.368566 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:56:43.368572 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:56:43.368578 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:56:43.368585 | orchestrator | 2026-03-07 00:56:43.368591 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2026-03-07 00:56:43.368598 | orchestrator | Saturday 07 March 2026 00:56:00 +0000 (0:00:00.940) 0:06:58.964 ******** 2026-03-07 00:56:43.368605 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:56:43.368611 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:56:43.368618 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:56:43.368624 | orchestrator | 2026-03-07 00:56:43.368631 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2026-03-07 00:56:43.368637 | orchestrator | Saturday 07 March 2026 00:56:10 +0000 (0:00:09.802) 0:07:08.767 ******** 2026-03-07 00:56:43.368644 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:56:43.368650 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:56:43.368657 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:56:43.368663 | orchestrator | 2026-03-07 00:56:43.368670 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2026-03-07 00:56:43.368676 | orchestrator | Saturday 07 March 2026 00:56:11 +0000 (0:00:00.793) 0:07:09.560 ******** 2026-03-07 00:56:43.368683 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:56:43.368689 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:56:43.368696 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:56:43.368703 | orchestrator | 2026-03-07 00:56:43.368709 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2026-03-07 
00:56:43.368716 | orchestrator | Saturday 07 March 2026 00:56:23 +0000 (0:00:11.761) 0:07:21.321 ******** 2026-03-07 00:56:43.368722 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:56:43.368729 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:56:43.368735 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:56:43.368742 | orchestrator | 2026-03-07 00:56:43.368757 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2026-03-07 00:56:43.368764 | orchestrator | Saturday 07 March 2026 00:56:28 +0000 (0:00:05.252) 0:07:26.574 ******** 2026-03-07 00:56:43.368771 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:56:43.368777 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:56:43.368783 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:56:43.368790 | orchestrator | 2026-03-07 00:56:43.368796 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2026-03-07 00:56:43.368803 | orchestrator | Saturday 07 March 2026 00:56:33 +0000 (0:00:04.929) 0:07:31.503 ******** 2026-03-07 00:56:43.368810 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:56:43.368816 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:56:43.368823 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:56:43.368829 | orchestrator | 2026-03-07 00:56:43.368838 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2026-03-07 00:56:43.368845 | orchestrator | Saturday 07 March 2026 00:56:33 +0000 (0:00:00.371) 0:07:31.875 ******** 2026-03-07 00:56:43.368851 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:56:43.368860 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:56:43.368867 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:56:43.368873 | orchestrator | 2026-03-07 00:56:43.368880 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2026-03-07 00:56:43.368887 | 
orchestrator | Saturday 07 March 2026 00:56:33 +0000 (0:00:00.394) 0:07:32.270 ******** 2026-03-07 00:56:43.368897 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:56:43.368903 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:56:43.368921 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:56:43.368928 | orchestrator | 2026-03-07 00:56:43.368934 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2026-03-07 00:56:43.368941 | orchestrator | Saturday 07 March 2026 00:56:34 +0000 (0:00:00.814) 0:07:33.085 ******** 2026-03-07 00:56:43.368948 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:56:43.368960 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:56:43.368967 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:56:43.368973 | orchestrator | 2026-03-07 00:56:43.368980 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2026-03-07 00:56:43.368986 | orchestrator | Saturday 07 March 2026 00:56:35 +0000 (0:00:00.395) 0:07:33.481 ******** 2026-03-07 00:56:43.368993 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:56:43.368999 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:56:43.369006 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:56:43.369012 | orchestrator | 2026-03-07 00:56:43.369019 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2026-03-07 00:56:43.369025 | orchestrator | Saturday 07 March 2026 00:56:35 +0000 (0:00:00.477) 0:07:33.958 ******** 2026-03-07 00:56:43.369032 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:56:43.369038 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:56:43.369045 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:56:43.369051 | orchestrator | 2026-03-07 00:56:43.369058 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2026-03-07 00:56:43.369064 | 
orchestrator | Saturday 07 March 2026 00:56:36 +0000 (0:00:00.554) 0:07:34.513 ********
2026-03-07 00:56:43.369071 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:56:43.369077 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:56:43.369084 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:56:43.369090 | orchestrator |
2026-03-07 00:56:43.369097 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************
2026-03-07 00:56:43.369103 | orchestrator | Saturday 07 March 2026 00:56:39 +0000 (0:00:03.537) 0:07:38.050 ********
2026-03-07 00:56:43.369109 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:56:43.369116 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:56:43.369122 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:56:43.369129 | orchestrator |
2026-03-07 00:56:43.369135 | orchestrator | PLAY RECAP *********************************************************************
2026-03-07 00:56:43.369142 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0  failed=0  skipped=97  rescued=0  ignored=0
2026-03-07 00:56:43.369149 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0  failed=0  skipped=97  rescued=0  ignored=0
2026-03-07 00:56:43.369155 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0  failed=0  skipped=97  rescued=0  ignored=0
2026-03-07 00:56:43.369162 | orchestrator |
2026-03-07 00:56:43.369175 | orchestrator | TASKS RECAP ********************************************************************
2026-03-07 00:56:43.369182 | orchestrator | Saturday 07 March 2026 00:56:40 +0000 (0:00:00.815) 0:07:38.866 ********
2026-03-07 00:56:43.369188 | orchestrator | ===============================================================================
2026-03-07 00:56:43.369195 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 11.76s
2026-03-07 00:56:43.369201 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 9.80s
2026-03-07 00:56:43.369208 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 8.54s
2026-03-07 00:56:43.369214 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 8.35s
2026-03-07 00:56:43.369221 | orchestrator | loadbalancer : Copying checks for services which are enabled ------------ 7.05s
2026-03-07 00:56:43.369231 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.95s
2026-03-07 00:56:43.369237 | orchestrator | haproxy-config : Configuring firewall for ceph-rgw ---------------------- 6.31s
2026-03-07 00:56:43.369244 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 6.29s
2026-03-07 00:56:43.369250 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 6.14s
2026-03-07 00:56:43.369257 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.91s
2026-03-07 00:56:43.369263 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 5.74s
2026-03-07 00:56:43.369269 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 5.55s
2026-03-07 00:56:43.369276 | orchestrator | loadbalancer : Copying over keepalived.conf ----------------------------- 5.40s
2026-03-07 00:56:43.369282 | orchestrator | haproxy-config : Copying over horizon haproxy config -------------------- 5.27s
2026-03-07 00:56:43.369289 | orchestrator | loadbalancer : Wait for backup proxysql to start ------------------------ 5.25s
2026-03-07 00:56:43.369295 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 4.97s
2026-03-07 00:56:43.369301 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 4.93s
2026-03-07 00:56:43.369307 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.82s
2026-03-07 00:56:43.369317 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.56s
2026-03-07 00:56:43.369323 | orchestrator | haproxy-config : Copying over magnum haproxy config --------------------- 4.54s
2026-03-07 00:56:43.369332 | orchestrator | 2026-03-07 00:56:43 | INFO  | Task 69aa7610-06c7-47a5-8987-d81562fa3369 is in state STARTED
2026-03-07 00:56:43.369338 | orchestrator | 2026-03-07 00:56:43 | INFO  | Task 58f3c3b8-7548-464c-bafb-42fd537ca7de is in state STARTED
2026-03-07 00:56:43.369345 | orchestrator | 2026-03-07 00:56:43 | INFO  | Task 476aa002-fef5-421c-a958-975f3289671b is in state STARTED
2026-03-07 00:56:43.369351 | orchestrator | 2026-03-07 00:56:43 | INFO  | Wait 1 second(s) until the next check
[... identical status checks repeated every ~3 s from 00:56:46 to 00:58:51; all three tasks remained in state STARTED ...]
2026-03-07 00:58:54.384405 | orchestrator | 2026-03-07 00:58:54 | INFO  | Task 69aa7610-06c7-47a5-8987-d81562fa3369 is in state STARTED
2026-03-07 00:58:54.385262 | orchestrator | 2026-03-07 00:58:54 | INFO  | Task 58f3c3b8-7548-464c-bafb-42fd537ca7de is in state STARTED
2026-03-07 00:58:54.386492 | orchestrator | 2026-03-07 00:58:54 | INFO  | Task 476aa002-fef5-421c-a958-975f3289671b is in state STARTED
2026-03-07 00:58:54.386664 | orchestrator | 2026-03-07 00:58:54 | INFO  | Wait 1 second(s) until the next check
2026-03-07 00:58:57.446727 | orchestrator | 2026-03-07 00:58:57 | INFO  | Task 8f335aab-7019-4a86-a41c-19d160186bd9 is in state STARTED
2026-03-07 00:58:57.447680 | orchestrator | 2026-03-07 00:58:57 | INFO  | Task 69aa7610-06c7-47a5-8987-d81562fa3369 is in state STARTED
2026-03-07 00:58:57.455569 | orchestrator | 2026-03-07 00:58:57 | INFO  | Task 58f3c3b8-7548-464c-bafb-42fd537ca7de is in state SUCCESS
2026-03-07 00:58:57.457365 | orchestrator |
2026-03-07 00:58:57.457429 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-03-07 00:58:57.457451 | orchestrator | 2.16.14
2026-03-07 00:58:57.457473 | orchestrator |
2026-03-07 00:58:57.457493 | orchestrator | PLAY [Prepare deployment of Ceph services] *************************************
************************************* 2026-03-07 00:58:57.457587 | orchestrator | 2026-03-07 00:58:57.457607 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-07 00:58:57.457627 | orchestrator | Saturday 07 March 2026 00:46:15 +0000 (0:00:00.945) 0:00:00.945 ******** 2026-03-07 00:58:57.457648 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 00:58:57.457668 | orchestrator | 2026-03-07 00:58:57.457687 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-07 00:58:57.457706 | orchestrator | Saturday 07 March 2026 00:46:17 +0000 (0:00:01.549) 0:00:02.495 ******** 2026-03-07 00:58:57.457724 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:58:57.457741 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:58:57.457760 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:58:57.457776 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:58:57.457794 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:58:57.457813 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:58:57.458209 | orchestrator | 2026-03-07 00:58:57.458241 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-07 00:58:57.458262 | orchestrator | Saturday 07 March 2026 00:46:19 +0000 (0:00:02.191) 0:00:04.686 ******** 2026-03-07 00:58:57.458284 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:58:57.458304 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:58:57.458324 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:58:57.458346 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:58:57.458366 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:58:57.458387 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:58:57.458408 | orchestrator | 2026-03-07 00:58:57.458428 | orchestrator | TASK [ceph-facts : Check if podman 
binary is present] ************************** 2026-03-07 00:58:57.458447 | orchestrator | Saturday 07 March 2026 00:46:20 +0000 (0:00:01.387) 0:00:06.074 ******** 2026-03-07 00:58:57.458467 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:58:57.458486 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:58:57.458505 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:58:57.458524 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:58:57.458544 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:58:57.458564 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:58:57.458584 | orchestrator | 2026-03-07 00:58:57.458605 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-07 00:58:57.458617 | orchestrator | Saturday 07 March 2026 00:46:22 +0000 (0:00:01.117) 0:00:07.192 ******** 2026-03-07 00:58:57.458630 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:58:57.458648 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:58:57.458666 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:58:57.458684 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:58:57.458702 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:58:57.459522 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:58:57.459545 | orchestrator | 2026-03-07 00:58:57.459558 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-07 00:58:57.459570 | orchestrator | Saturday 07 March 2026 00:46:23 +0000 (0:00:01.236) 0:00:08.428 ******** 2026-03-07 00:58:57.459579 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:58:57.459589 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:58:57.459599 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:58:57.459609 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:58:57.459619 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:58:57.459628 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:58:57.459638 | orchestrator | 2026-03-07 00:58:57.459648 | orchestrator | TASK 
[ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-03-07 00:58:57.459658 | orchestrator | Saturday 07 March 2026 00:46:24 +0000 (0:00:00.885) 0:00:09.314 ********
2026-03-07 00:58:57.459668 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:58:57.459677 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:58:57.459687 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:58:57.459696 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:58:57.459707 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:58:57.459717 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:58:57.459726 | orchestrator |
2026-03-07 00:58:57.459736 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-03-07 00:58:57.459746 | orchestrator | Saturday 07 March 2026 00:46:25 +0000 (0:00:01.767) 0:00:11.082 ********
2026-03-07 00:58:57.459756 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:58:57.459767 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:58:57.459777 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:58:57.459786 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:58:57.459795 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:58:57.459805 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:58:57.461605 | orchestrator |
2026-03-07 00:58:57.461653 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-03-07 00:58:57.461664 | orchestrator | Saturday 07 March 2026 00:46:26 +0000 (0:00:00.957) 0:00:12.039 ********
2026-03-07 00:58:57.461674 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:58:57.461685 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:58:57.461715 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:58:57.461725 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:58:57.461734 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:58:57.461743 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:58:57.461753 | orchestrator |
2026-03-07 00:58:57.461763 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-03-07 00:58:57.461773 | orchestrator | Saturday 07 March 2026 00:46:28 +0000 (0:00:01.269) 0:00:13.308 ********
2026-03-07 00:58:57.461783 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-07 00:58:57.461792 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-07 00:58:57.461802 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-07 00:58:57.461812 | orchestrator |
2026-03-07 00:58:57.461821 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-03-07 00:58:57.461831 | orchestrator | Saturday 07 March 2026 00:46:29 +0000 (0:00:00.931) 0:00:14.239 ********
2026-03-07 00:58:57.461841 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:58:57.461876 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:58:57.461886 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:58:57.462708 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:58:57.462776 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:58:57.462788 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:58:57.462797 | orchestrator |
2026-03-07 00:58:57.462808 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-03-07 00:58:57.462817 | orchestrator | Saturday 07 March 2026 00:46:30 +0000 (0:00:01.660) 0:00:15.899 ********
2026-03-07 00:58:57.462827 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-07 00:58:57.462837 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-07 00:58:57.462913 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-07 00:58:57.462925 | orchestrator |
2026-03-07 00:58:57.462935 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-03-07 00:58:57.462945 | orchestrator | Saturday 07 March 2026 00:46:33 +0000 (0:00:03.115) 0:00:19.015 ********
2026-03-07 00:58:57.462956 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-03-07 00:58:57.462966 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-03-07 00:58:57.462976 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-03-07 00:58:57.462985 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:58:57.462995 | orchestrator |
2026-03-07 00:58:57.463004 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-03-07 00:58:57.463014 | orchestrator | Saturday 07 March 2026 00:46:34 +0000 (0:00:00.978) 0:00:19.994 ********
2026-03-07 00:58:57.463026 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-03-07 00:58:57.463040 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-03-07 00:58:57.463050 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-03-07 00:58:57.463060 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:58:57.463070 | orchestrator |
2026-03-07 00:58:57.463169 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-03-07 00:58:57.463207 | orchestrator | Saturday 07 March 2026 00:46:35 +0000 (0:00:01.160) 0:00:21.154 ********
2026-03-07 00:58:57.463366 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-07 00:58:57.463390 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-07 00:58:57.463409 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-07 00:58:57.463426 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:58:57.463443 | orchestrator |
2026-03-07 00:58:57.463461 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-03-07 00:58:57.463479 | orchestrator | Saturday 07 March 2026 00:46:36 +0000 (0:00:00.705) 0:00:21.860 ********
2026-03-07 00:58:57.463524 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-07 00:46:31.470854', 'end': '2026-03-07 00:46:31.559909', 'delta': '0:00:00.089055', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-03-07 00:58:57.463541 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-07 00:46:32.265707', 'end': '2026-03-07 00:46:32.381843', 'delta': '0:00:00.116136', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-03-07 00:58:57.463553 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-07 00:46:33.395638', 'end': '2026-03-07 00:46:33.504358', 'delta': '0:00:00.108720', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-03-07 00:58:57.463563 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:58:57.463573 | orchestrator |
2026-03-07 00:58:57.463583 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-03-07 00:58:57.463601 | orchestrator | Saturday 07 March 2026 00:46:36 +0000 (0:00:00.212) 0:00:22.073 ********
2026-03-07 00:58:57.463611 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:58:57.463622 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:58:57.463631 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:58:57.463641 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:58:57.463650 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:58:57.463660 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:58:57.463669 | orchestrator |
2026-03-07 00:58:57.463679 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-03-07 00:58:57.463689 | orchestrator | Saturday 07 March 2026 00:46:39 +0000 (0:00:02.334) 0:00:24.407 ********
2026-03-07 00:58:57.463698 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-07 00:58:57.463708 | orchestrator |
2026-03-07 00:58:57.463717 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-03-07 00:58:57.463727 | orchestrator | Saturday 07 March 2026 00:46:40 +0000 (0:00:00.920) 0:00:25.328 ********
2026-03-07 00:58:57.463737 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:58:57.463747 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:58:57.463756 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:58:57.463766 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:58:57.463775 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:58:57.463785 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:58:57.463795 | orchestrator |
2026-03-07 00:58:57.463804 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-03-07 00:58:57.463814 | orchestrator | Saturday 07 March 2026 00:46:42 +0000 (0:00:02.687) 0:00:28.015 ********
2026-03-07 00:58:57.463823 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:58:57.463833 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:58:57.463842 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:58:57.463877 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:58:57.463887 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:58:57.463896 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:58:57.463905 | orchestrator |
2026-03-07 00:58:57.463915 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-03-07 00:58:57.463925 | orchestrator | Saturday 07 March 2026 00:46:46 +0000 (0:00:03.247) 0:00:31.262 ********
2026-03-07 00:58:57.463934 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:58:57.463944 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:58:57.463953 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:58:57.463963 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:58:57.463972 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:58:57.463981 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:58:57.463991 | orchestrator |
2026-03-07 00:58:57.464000 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-03-07 00:58:57.464010 | orchestrator | Saturday 07 March 2026 00:46:49 +0000 (0:00:03.162) 0:00:34.425 ********
2026-03-07 00:58:57.464020 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:58:57.464029 | orchestrator |
2026-03-07 00:58:57.464039 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-03-07 00:58:57.464048 | orchestrator | Saturday 07 March 2026 00:46:49 +0000 (0:00:00.356) 0:00:34.781 ********
2026-03-07 00:58:57.464058 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:58:57.464067 | orchestrator |
2026-03-07 00:58:57.464077 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-03-07 00:58:57.464086 | orchestrator | Saturday 07 March 2026 00:46:50 +0000 (0:00:00.560) 0:00:35.342 ********
2026-03-07 00:58:57.464096 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:58:57.464111 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:58:57.464121 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:58:57.464138 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:58:57.464148 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:58:57.464157 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:58:57.464174 | orchestrator |
2026-03-07 00:58:57.464184 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-03-07 00:58:57.464193 | orchestrator | Saturday 07 March 2026 00:46:52 +0000 (0:00:01.863) 0:00:37.205 ********
2026-03-07 00:58:57.464203 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:58:57.464212 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:58:57.464222 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:58:57.464231 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:58:57.464241 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:58:57.464250 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:58:57.464260 | orchestrator |
2026-03-07 00:58:57.464269 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-03-07 00:58:57.464279 | orchestrator | Saturday 07 March 2026 00:46:54 +0000 (0:00:02.285) 0:00:39.490 ********
2026-03-07 00:58:57.464289 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:58:57.464298 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:58:57.464308 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:58:57.464318 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:58:57.464327 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:58:57.464336 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:58:57.464346 | orchestrator |
2026-03-07 00:58:57.464356 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-03-07 00:58:57.464365 | orchestrator | Saturday 07 March 2026 00:46:55 +0000 (0:00:01.478) 0:00:40.969 ********
2026-03-07 00:58:57.464375 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:58:57.464384 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:58:57.464394 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:58:57.464403 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:58:57.464413 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:58:57.464422 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:58:57.464432 | orchestrator |
2026-03-07 00:58:57.464442 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-03-07 00:58:57.464451 | orchestrator | Saturday 07 March 2026 00:46:57 +0000 (0:00:01.416) 0:00:42.386 ********
2026-03-07 00:58:57.464461 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:58:57.464470 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:58:57.464480 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:58:57.464489 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:58:57.464498 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:58:57.464508 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:58:57.464517 | orchestrator |
2026-03-07 00:58:57.464527 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-03-07 00:58:57.464537 | orchestrator | Saturday 07 March 2026 00:46:58 +0000 (0:00:00.881) 0:00:43.267 ********
2026-03-07 00:58:57.464546 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:58:57.464556 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:58:57.464566 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:58:57.464576 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:58:57.464585 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:58:57.464594 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:58:57.464604 | orchestrator |
2026-03-07 00:58:57.464614 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-03-07 00:58:57.464623 | orchestrator | Saturday 07 March 2026 00:46:59 +0000 (0:00:01.690) 0:00:44.958 ********
2026-03-07 00:58:57.464633 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:58:57.464643 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:58:57.464652 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:58:57.464662 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:58:57.464671 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:58:57.464681 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:58:57.464690 | orchestrator |
2026-03-07 00:58:57.464700 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-03-07 00:58:57.464716 | orchestrator | Saturday 07 March 2026 00:47:00 +0000 (0:00:00.906) 0:00:45.864 ********
2026-03-07 00:58:57.464727 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e9f941f3--03bb--56ef--8ac7--c30bc8004c51-osd--block--e9f941f3--03bb--56ef--8ac7--c30bc8004c51', 'dm-uuid-LVM-jiYuCfZIFFLLATdSqMWZs2byf2Hqw9KoUEwdOtxjfUj2xFbqUYee2AMaAjRqF8Gb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 
'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-07 00:58:57.464739 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6cee2ec4--9e84--549b--8075--e81043ce518c-osd--block--6cee2ec4--9e84--549b--8075--e81043ce518c', 'dm-uuid-LVM-B8bLbepi7zk4LlUHWUoFcpgJuCxmaQP4j5OWt0ye3awuf5KvZzYB8ByFXsEb2OPh'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-07 00:58:57.464763 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-07 00:58:57.464774 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-07 00:58:57.464784 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 
'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-07 00:58:57.464794 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-07 00:58:57.464804 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-07 00:58:57.464814 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-07 00:58:57.464830 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c6d853cd--f8df--5f7f--ab25--9ac4f40a4d2c-osd--block--c6d853cd--f8df--5f7f--ab25--9ac4f40a4d2c', 'dm-uuid-LVM-8XR3XmOVd2B8PVaNnTDqflfNiRw1uJKWO0Hm3UQTPEfuME0WHh0U21tmJ674G9e9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-07 00:58:57.464841 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-07 00:58:57.464871 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--50ec861c--6b17--5421--b6cb--257ea2a8b129-osd--block--50ec861c--6b17--5421--b6cb--257ea2a8b129', 'dm-uuid-LVM-ITJqhhsuUHE2k8u0ISlqfZTbYeEByERaXDaCZ04QKtCaLlQ7frKxmGzqlFMPE1RH'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-07 00:58:57.464893 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-07 00:58:57.464904 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-07 00:58:57.464918 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_448434c8-f92f-4dad-84e2-85ad64f4e35e', 'scsi-SQEMU_QEMU_HARDDISK_448434c8-f92f-4dad-84e2-85ad64f4e35e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_448434c8-f92f-4dad-84e2-85ad64f4e35e-part1', 'scsi-SQEMU_QEMU_HARDDISK_448434c8-f92f-4dad-84e2-85ad64f4e35e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_448434c8-f92f-4dad-84e2-85ad64f4e35e-part14', 'scsi-SQEMU_QEMU_HARDDISK_448434c8-f92f-4dad-84e2-85ad64f4e35e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_448434c8-f92f-4dad-84e2-85ad64f4e35e-part15', 'scsi-SQEMU_QEMU_HARDDISK_448434c8-f92f-4dad-84e2-85ad64f4e35e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_448434c8-f92f-4dad-84e2-85ad64f4e35e-part16', 'scsi-SQEMU_QEMU_HARDDISK_448434c8-f92f-4dad-84e2-85ad64f4e35e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 
'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-07 00:58:57.464939 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-07 00:58:57.464950 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--e9f941f3--03bb--56ef--8ac7--c30bc8004c51-osd--block--e9f941f3--03bb--56ef--8ac7--c30bc8004c51'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Lq9jL1-3czA-ypLx-r35L-ph0k-iv5M-Tpn0zj', 'scsi-0QEMU_QEMU_HARDDISK_6b3da8fe-8a9b-450a-9caf-2db14f74686e', 'scsi-SQEMU_QEMU_HARDDISK_6b3da8fe-8a9b-450a-9caf-2db14f74686e'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-07 00:58:57.464974 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-07 00:58:57.464985 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--6cee2ec4--9e84--549b--8075--e81043ce518c-osd--block--6cee2ec4--9e84--549b--8075--e81043ce518c'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-7sfVzN-ghhm-9cSP-0Pq1-SUpz-oO0I-1m8yZK', 'scsi-0QEMU_QEMU_HARDDISK_72259f68-e866-4719-b0ea-eb473e4fd6bd', 'scsi-SQEMU_QEMU_HARDDISK_72259f68-e866-4719-b0ea-eb473e4fd6bd'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-07 00:58:57.464995 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-07 00:58:57.465005 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f3e458ba--b75f--5cb4--a1c9--e61fe3486295-osd--block--f3e458ba--b75f--5cb4--a1c9--e61fe3486295', 'dm-uuid-LVM-VhQzIXCpqfcn5zxKE5r7ztI1fyiYqLzHEtYkvSf66TMZqdVR7ccCs8N8OfaPuyV8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-07 00:58:57.465022 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cc667673-5185-49c1-bb99-04f4fd4068da', 'scsi-SQEMU_QEMU_HARDDISK_cc667673-5185-49c1-bb99-04f4fd4068da'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-07 00:58:57.465032 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-07 00:58:57.465042 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--5cfbeba1--5550--585b--8a7e--42a4921f8eca-osd--block--5cfbeba1--5550--585b--8a7e--42a4921f8eca', 'dm-uuid-LVM-Wq2OW3tsl6jTTaLcKmTav2JwTpRCFqU2JgsK1FfoH8ERtcQ22t3sS9NNXbRdzkph'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-07 00:58:57.465064 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-07-00-03-06-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-07 00:58:57.465075 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-07 00:58:57.465085 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-07 00:58:57.465095 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-07 00:58:57.465105 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-07 00:58:57.465127 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-07 00:58:57.465137 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-07 00:58:57.465147 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-07 00:58:57.465157 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-07 00:58:57.465181 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_420a1f40-7e0f-4106-8b14-3c7e5e75cad6', 'scsi-SQEMU_QEMU_HARDDISK_420a1f40-7e0f-4106-8b14-3c7e5e75cad6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_420a1f40-7e0f-4106-8b14-3c7e5e75cad6-part1', 'scsi-SQEMU_QEMU_HARDDISK_420a1f40-7e0f-4106-8b14-3c7e5e75cad6-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_420a1f40-7e0f-4106-8b14-3c7e5e75cad6-part14', 'scsi-SQEMU_QEMU_HARDDISK_420a1f40-7e0f-4106-8b14-3c7e5e75cad6-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_420a1f40-7e0f-4106-8b14-3c7e5e75cad6-part15', 'scsi-SQEMU_QEMU_HARDDISK_420a1f40-7e0f-4106-8b14-3c7e5e75cad6-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_420a1f40-7e0f-4106-8b14-3c7e5e75cad6-part16', 'scsi-SQEMU_QEMU_HARDDISK_420a1f40-7e0f-4106-8b14-3c7e5e75cad6-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-07 00:58:57.465200 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-07 00:58:57.465210 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-07 00:58:57.465220 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-07 00:58:57.465230 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--c6d853cd--f8df--5f7f--ab25--9ac4f40a4d2c-osd--block--c6d853cd--f8df--5f7f--ab25--9ac4f40a4d2c'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-kL3PrW-YKUZ-t2Rl-lXXg-ITpx-OegE-g2PFSL', 'scsi-0QEMU_QEMU_HARDDISK_c95cdd10-84fe-4990-af41-f1a34ec8ee15', 'scsi-SQEMU_QEMU_HARDDISK_c95cdd10-84fe-4990-af41-f1a34ec8ee15'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-07 00:58:57.465251 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-07 00:58:57.465261 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-07 00:58:57.465271 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--50ec861c--6b17--5421--b6cb--257ea2a8b129-osd--block--50ec861c--6b17--5421--b6cb--257ea2a8b129'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-PC8ITn-KVPH-rj6x-YF4C-PrRN-sXNg-fVd1gi', 'scsi-0QEMU_QEMU_HARDDISK_aeae70bf-06ae-4bd4-b471-9be2a413b359', 'scsi-SQEMU_QEMU_HARDDISK_aeae70bf-06ae-4bd4-b471-9be2a413b359'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-07 00:58:57.465282 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-07 00:58:57.465298 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-07 00:58:57.465308 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9c38bee3-edc8-40af-8be7-576eb57a340e', 'scsi-SQEMU_QEMU_HARDDISK_9c38bee3-edc8-40af-8be7-576eb57a340e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-07 00:58:57.465330 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_86fe1ccb-6d77-4e2c-ab6c-94ce433f2c86', 'scsi-SQEMU_QEMU_HARDDISK_86fe1ccb-6d77-4e2c-ab6c-94ce433f2c86'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_86fe1ccb-6d77-4e2c-ab6c-94ce433f2c86-part1', 'scsi-SQEMU_QEMU_HARDDISK_86fe1ccb-6d77-4e2c-ab6c-94ce433f2c86-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_86fe1ccb-6d77-4e2c-ab6c-94ce433f2c86-part14', 'scsi-SQEMU_QEMU_HARDDISK_86fe1ccb-6d77-4e2c-ab6c-94ce433f2c86-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_86fe1ccb-6d77-4e2c-ab6c-94ce433f2c86-part15', 'scsi-SQEMU_QEMU_HARDDISK_86fe1ccb-6d77-4e2c-ab6c-94ce433f2c86-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_86fe1ccb-6d77-4e2c-ab6c-94ce433f2c86-part16', 'scsi-SQEMU_QEMU_HARDDISK_86fe1ccb-6d77-4e2c-ab6c-94ce433f2c86-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-07 00:58:57.465342 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--f3e458ba--b75f--5cb4--a1c9--e61fe3486295-osd--block--f3e458ba--b75f--5cb4--a1c9--e61fe3486295'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-8FKuTE-n2yH-Ra88-Nh73-4mV7-nLrM-yos4UV', 'scsi-0QEMU_QEMU_HARDDISK_34b2d3d1-49da-433c-9475-894febcc7103', 'scsi-SQEMU_QEMU_HARDDISK_34b2d3d1-49da-433c-9475-894febcc7103'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-07 00:58:57.465360 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-07 00:58:57.465370 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-07-00-02-59-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-07 00:58:57.465380 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--5cfbeba1--5550--585b--8a7e--42a4921f8eca-osd--block--5cfbeba1--5550--585b--8a7e--42a4921f8eca'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-fAaf82-4o5V-tENd-n1vK-sRdp-WZdV-yCL7oe', 'scsi-0QEMU_QEMU_HARDDISK_c20bba62-61d0-4a1a-9760-7959bbad95dc', 'scsi-SQEMU_QEMU_HARDDISK_c20bba62-61d0-4a1a-9760-7959bbad95dc'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-07 00:58:57.465390 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_56f8efd0-3f15-4df4-bf76-395b3326da9d', 'scsi-SQEMU_QEMU_HARDDISK_56f8efd0-3f15-4df4-bf76-395b3326da9d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-07 00:58:57.465400 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:58:57.465422 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-07 00:58:57.465433 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-07-00-02-58-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-07 00:58:57.465443 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-07 00:58:57.465460 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-07 00:58:57.465471 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6d86149b-0ba4-4a58-9fc2-b00d0a760740', 'scsi-SQEMU_QEMU_HARDDISK_6d86149b-0ba4-4a58-9fc2-b00d0a760740'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6d86149b-0ba4-4a58-9fc2-b00d0a760740-part1', 'scsi-SQEMU_QEMU_HARDDISK_6d86149b-0ba4-4a58-9fc2-b00d0a760740-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6d86149b-0ba4-4a58-9fc2-b00d0a760740-part14', 'scsi-SQEMU_QEMU_HARDDISK_6d86149b-0ba4-4a58-9fc2-b00d0a760740-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6d86149b-0ba4-4a58-9fc2-b00d0a760740-part15', 'scsi-SQEMU_QEMU_HARDDISK_6d86149b-0ba4-4a58-9fc2-b00d0a760740-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6d86149b-0ba4-4a58-9fc2-b00d0a760740-part16', 'scsi-SQEMU_QEMU_HARDDISK_6d86149b-0ba4-4a58-9fc2-b00d0a760740-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-07 00:58:57.465494 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-07-00-03-03-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-07 00:58:57.465505 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-07 00:58:57.465515 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-07 00:58:57.465531 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-07 00:58:57.465541 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-07 00:58:57.465551 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-07 00:58:57.465560 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-07 00:58:57.465570 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-07 00:58:57.465580 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-07 00:58:57.465602 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_63d08656-5fe3-4965-96b2-d9d7b897e8d9', 'scsi-SQEMU_QEMU_HARDDISK_63d08656-5fe3-4965-96b2-d9d7b897e8d9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_63d08656-5fe3-4965-96b2-d9d7b897e8d9-part1', 'scsi-SQEMU_QEMU_HARDDISK_63d08656-5fe3-4965-96b2-d9d7b897e8d9-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_63d08656-5fe3-4965-96b2-d9d7b897e8d9-part14', 'scsi-SQEMU_QEMU_HARDDISK_63d08656-5fe3-4965-96b2-d9d7b897e8d9-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_63d08656-5fe3-4965-96b2-d9d7b897e8d9-part15', 'scsi-SQEMU_QEMU_HARDDISK_63d08656-5fe3-4965-96b2-d9d7b897e8d9-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_63d08656-5fe3-4965-96b2-d9d7b897e8d9-part16', 'scsi-SQEMU_QEMU_HARDDISK_63d08656-5fe3-4965-96b2-d9d7b897e8d9-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-07 00:58:57.465620 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:58:57.465630 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-07-00-03-05-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-07 00:58:57.465640 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:58:57.465650 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:58:57.465660 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:58:57.465670 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-07 00:58:57.465680 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-07 00:58:57.465690 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-07 00:58:57.465704 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-07 00:58:57.465721 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-07 00:58:57.465737 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-07 00:58:57.465747 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-07 00:58:57.465757 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-07 00:58:57.465768 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e87ab8c-74da-42e5-bd8a-bfd4a87775ea', 'scsi-SQEMU_QEMU_HARDDISK_1e87ab8c-74da-42e5-bd8a-bfd4a87775ea'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e87ab8c-74da-42e5-bd8a-bfd4a87775ea-part1', 'scsi-SQEMU_QEMU_HARDDISK_1e87ab8c-74da-42e5-bd8a-bfd4a87775ea-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e87ab8c-74da-42e5-bd8a-bfd4a87775ea-part14', 'scsi-SQEMU_QEMU_HARDDISK_1e87ab8c-74da-42e5-bd8a-bfd4a87775ea-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e87ab8c-74da-42e5-bd8a-bfd4a87775ea-part15', 'scsi-SQEMU_QEMU_HARDDISK_1e87ab8c-74da-42e5-bd8a-bfd4a87775ea-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e87ab8c-74da-42e5-bd8a-bfd4a87775ea-part16', 'scsi-SQEMU_QEMU_HARDDISK_1e87ab8c-74da-42e5-bd8a-bfd4a87775ea-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-07 00:58:57.465795 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-07-00-03-01-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-07 00:58:57.465812 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:58:57.465822 | orchestrator |
2026-03-07 00:58:57.465832 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-03-07 00:58:57.465842 | orchestrator | Saturday 07 March 2026 00:47:04 +0000 (0:00:04.061) 0:00:49.926 ********
2026-03-07 00:58:57.465879 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e9f941f3--03bb--56ef--8ac7--c30bc8004c51-osd--block--e9f941f3--03bb--56ef--8ac7--c30bc8004c51', 'dm-uuid-LVM-jiYuCfZIFFLLATdSqMWZs2byf2Hqw9KoUEwdOtxjfUj2xFbqUYee2AMaAjRqF8Gb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-07 00:58:57.465890 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6cee2ec4--9e84--549b--8075--e81043ce518c-osd--block--6cee2ec4--9e84--549b--8075--e81043ce518c', 'dm-uuid-LVM-B8bLbepi7zk4LlUHWUoFcpgJuCxmaQP4j5OWt0ye3awuf5KvZzYB8ByFXsEb2OPh'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-07 00:58:57.465901 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
 2026-03-07 00:58:57.465911 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 00:58:57.465921 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 00:58:57.465943 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 00:58:57.465960 | orchestrator | skipping: [testbed-node-3] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 00:58:57.465970 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 00:58:57.465980 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 00:58:57.465990 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result 
was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 00:58:57.466000 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c6d853cd--f8df--5f7f--ab25--9ac4f40a4d2c-osd--block--c6d853cd--f8df--5f7f--ab25--9ac4f40a4d2c', 'dm-uuid-LVM-8XR3XmOVd2B8PVaNnTDqflfNiRw1uJKWO0Hm3UQTPEfuME0WHh0U21tmJ674G9e9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 00:58:57.466077 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_448434c8-f92f-4dad-84e2-85ad64f4e35e', 'scsi-SQEMU_QEMU_HARDDISK_448434c8-f92f-4dad-84e2-85ad64f4e35e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_448434c8-f92f-4dad-84e2-85ad64f4e35e-part1', 'scsi-SQEMU_QEMU_HARDDISK_448434c8-f92f-4dad-84e2-85ad64f4e35e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_448434c8-f92f-4dad-84e2-85ad64f4e35e-part14', 'scsi-SQEMU_QEMU_HARDDISK_448434c8-f92f-4dad-84e2-85ad64f4e35e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_448434c8-f92f-4dad-84e2-85ad64f4e35e-part15', 'scsi-SQEMU_QEMU_HARDDISK_448434c8-f92f-4dad-84e2-85ad64f4e35e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_448434c8-f92f-4dad-84e2-85ad64f4e35e-part16', 'scsi-SQEMU_QEMU_HARDDISK_448434c8-f92f-4dad-84e2-85ad64f4e35e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-03-07 00:58:57.467169 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--50ec861c--6b17--5421--b6cb--257ea2a8b129-osd--block--50ec861c--6b17--5421--b6cb--257ea2a8b129', 'dm-uuid-LVM-ITJqhhsuUHE2k8u0ISlqfZTbYeEByERaXDaCZ04QKtCaLlQ7frKxmGzqlFMPE1RH'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 00:58:57.467273 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--e9f941f3--03bb--56ef--8ac7--c30bc8004c51-osd--block--e9f941f3--03bb--56ef--8ac7--c30bc8004c51'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Lq9jL1-3czA-ypLx-r35L-ph0k-iv5M-Tpn0zj', 'scsi-0QEMU_QEMU_HARDDISK_6b3da8fe-8a9b-450a-9caf-2db14f74686e', 'scsi-SQEMU_QEMU_HARDDISK_6b3da8fe-8a9b-450a-9caf-2db14f74686e'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 00:58:57.467300 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 00:58:57.467340 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--6cee2ec4--9e84--549b--8075--e81043ce518c-osd--block--6cee2ec4--9e84--549b--8075--e81043ce518c'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-7sfVzN-ghhm-9cSP-0Pq1-SUpz-oO0I-1m8yZK', 'scsi-0QEMU_QEMU_HARDDISK_72259f68-e866-4719-b0ea-eb473e4fd6bd', 'scsi-SQEMU_QEMU_HARDDISK_72259f68-e866-4719-b0ea-eb473e4fd6bd'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 00:58:57.467352 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 00:58:57.467384 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cc667673-5185-49c1-bb99-04f4fd4068da', 'scsi-SQEMU_QEMU_HARDDISK_cc667673-5185-49c1-bb99-04f4fd4068da'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 00:58:57.467396 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 00:58:57.467409 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-07-00-03-06-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 00:58:57.467436 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 00:58:57.467449 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 00:58:57.467460 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 00:58:57.467481 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 00:58:57.467493 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 00:58:57.467505 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f3e458ba--b75f--5cb4--a1c9--e61fe3486295-osd--block--f3e458ba--b75f--5cb4--a1c9--e61fe3486295', 'dm-uuid-LVM-VhQzIXCpqfcn5zxKE5r7ztI1fyiYqLzHEtYkvSf66TMZqdVR7ccCs8N8OfaPuyV8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 
00:58:57.467540 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_420a1f40-7e0f-4106-8b14-3c7e5e75cad6', 'scsi-SQEMU_QEMU_HARDDISK_420a1f40-7e0f-4106-8b14-3c7e5e75cad6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_420a1f40-7e0f-4106-8b14-3c7e5e75cad6-part1', 'scsi-SQEMU_QEMU_HARDDISK_420a1f40-7e0f-4106-8b14-3c7e5e75cad6-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_420a1f40-7e0f-4106-8b14-3c7e5e75cad6-part14', 'scsi-SQEMU_QEMU_HARDDISK_420a1f40-7e0f-4106-8b14-3c7e5e75cad6-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_420a1f40-7e0f-4106-8b14-3c7e5e75cad6-part15', 'scsi-SQEMU_QEMU_HARDDISK_420a1f40-7e0f-4106-8b14-3c7e5e75cad6-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_420a1f40-7e0f-4106-8b14-3c7e5e75cad6-part16', 'scsi-SQEMU_QEMU_HARDDISK_420a1f40-7e0f-4106-8b14-3c7e5e75cad6-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': 
'09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 00:58:57.467563 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--5cfbeba1--5550--585b--8a7e--42a4921f8eca-osd--block--5cfbeba1--5550--585b--8a7e--42a4921f8eca', 'dm-uuid-LVM-Wq2OW3tsl6jTTaLcKmTav2JwTpRCFqU2JgsK1FfoH8ERtcQ22t3sS9NNXbRdzkph'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 00:58:57.467575 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--c6d853cd--f8df--5f7f--ab25--9ac4f40a4d2c-osd--block--c6d853cd--f8df--5f7f--ab25--9ac4f40a4d2c'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-kL3PrW-YKUZ-t2Rl-lXXg-ITpx-OegE-g2PFSL', 'scsi-0QEMU_QEMU_HARDDISK_c95cdd10-84fe-4990-af41-f1a34ec8ee15', 'scsi-SQEMU_QEMU_HARDDISK_c95cdd10-84fe-4990-af41-f1a34ec8ee15'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 00:58:57.467595 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:58:57.467614 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--50ec861c--6b17--5421--b6cb--257ea2a8b129-osd--block--50ec861c--6b17--5421--b6cb--257ea2a8b129'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-PC8ITn-KVPH-rj6x-YF4C-PrRN-sXNg-fVd1gi', 'scsi-0QEMU_QEMU_HARDDISK_aeae70bf-06ae-4bd4-b471-9be2a413b359', 'scsi-SQEMU_QEMU_HARDDISK_aeae70bf-06ae-4bd4-b471-9be2a413b359'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 00:58:57.467626 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 00:58:57.467638 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9c38bee3-edc8-40af-8be7-576eb57a340e', 'scsi-SQEMU_QEMU_HARDDISK_9c38bee3-edc8-40af-8be7-576eb57a340e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 00:58:57.467657 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-07-00-02-59-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 00:58:57.467672 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 00:58:57.467691 | orchestrator | skipping: 
[testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 00:58:57.467711 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 00:58:57.467726 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 00:58:57.467740 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': 
True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 00:58:57.467762 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 00:58:57.467778 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 00:58:57.467793 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 00:58:57.467814 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 00:58:57.467833 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 00:58:57.467870 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | 
default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 00:58:57.467885 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 00:58:57.467916 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6d86149b-0ba4-4a58-9fc2-b00d0a760740', 'scsi-SQEMU_QEMU_HARDDISK_6d86149b-0ba4-4a58-9fc2-b00d0a760740'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6d86149b-0ba4-4a58-9fc2-b00d0a760740-part1', 'scsi-SQEMU_QEMU_HARDDISK_6d86149b-0ba4-4a58-9fc2-b00d0a760740-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6d86149b-0ba4-4a58-9fc2-b00d0a760740-part14', 'scsi-SQEMU_QEMU_HARDDISK_6d86149b-0ba4-4a58-9fc2-b00d0a760740-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6d86149b-0ba4-4a58-9fc2-b00d0a760740-part15', 'scsi-SQEMU_QEMU_HARDDISK_6d86149b-0ba4-4a58-9fc2-b00d0a760740-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6d86149b-0ba4-4a58-9fc2-b00d0a760740-part16', 'scsi-SQEMU_QEMU_HARDDISK_6d86149b-0ba4-4a58-9fc2-b00d0a760740-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-03-07 00:58:57.467941 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-07-00-03-03-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 00:58:57.467955 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 00:58:57.467974 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 
'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 00:58:57.467988 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 00:58:57.468010 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_86fe1ccb-6d77-4e2c-ab6c-94ce433f2c86', 'scsi-SQEMU_QEMU_HARDDISK_86fe1ccb-6d77-4e2c-ab6c-94ce433f2c86'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_86fe1ccb-6d77-4e2c-ab6c-94ce433f2c86-part1', 'scsi-SQEMU_QEMU_HARDDISK_86fe1ccb-6d77-4e2c-ab6c-94ce433f2c86-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_86fe1ccb-6d77-4e2c-ab6c-94ce433f2c86-part14', 'scsi-SQEMU_QEMU_HARDDISK_86fe1ccb-6d77-4e2c-ab6c-94ce433f2c86-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_86fe1ccb-6d77-4e2c-ab6c-94ce433f2c86-part15', 'scsi-SQEMU_QEMU_HARDDISK_86fe1ccb-6d77-4e2c-ab6c-94ce433f2c86-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_86fe1ccb-6d77-4e2c-ab6c-94ce433f2c86-part16', 'scsi-SQEMU_QEMU_HARDDISK_86fe1ccb-6d77-4e2c-ab6c-94ce433f2c86-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-03-07 00:58:57.468032 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--f3e458ba--b75f--5cb4--a1c9--e61fe3486295-osd--block--f3e458ba--b75f--5cb4--a1c9--e61fe3486295'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-8FKuTE-n2yH-Ra88-Nh73-4mV7-nLrM-yos4UV', 'scsi-0QEMU_QEMU_HARDDISK_34b2d3d1-49da-433c-9475-894febcc7103', 'scsi-SQEMU_QEMU_HARDDISK_34b2d3d1-49da-433c-9475-894febcc7103'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 00:58:57.468052 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--5cfbeba1--5550--585b--8a7e--42a4921f8eca-osd--block--5cfbeba1--5550--585b--8a7e--42a4921f8eca'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-fAaf82-4o5V-tENd-n1vK-sRdp-WZdV-yCL7oe', 'scsi-0QEMU_QEMU_HARDDISK_c20bba62-61d0-4a1a-9760-7959bbad95dc', 'scsi-SQEMU_QEMU_HARDDISK_c20bba62-61d0-4a1a-9760-7959bbad95dc'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 00:58:57.468071 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_56f8efd0-3f15-4df4-bf76-395b3326da9d', 'scsi-SQEMU_QEMU_HARDDISK_56f8efd0-3f15-4df4-bf76-395b3326da9d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 00:58:57.468089 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-07-00-02-58-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 00:58:57.468100 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:58:57.468120 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 00:58:57.468141 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 00:58:57.468174 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 
'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 00:58:57.468196 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:58:57.468212 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:58:57.468230 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 00:58:57.468262 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 00:58:57.468280 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': 
[], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 00:58:57.468313 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 00:58:57.468333 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 00:58:57.468351 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 00:58:57.468388 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 00:58:57.468433 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_63d08656-5fe3-4965-96b2-d9d7b897e8d9', 'scsi-SQEMU_QEMU_HARDDISK_63d08656-5fe3-4965-96b2-d9d7b897e8d9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_63d08656-5fe3-4965-96b2-d9d7b897e8d9-part1', 'scsi-SQEMU_QEMU_HARDDISK_63d08656-5fe3-4965-96b2-d9d7b897e8d9-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_63d08656-5fe3-4965-96b2-d9d7b897e8d9-part14', 'scsi-SQEMU_QEMU_HARDDISK_63d08656-5fe3-4965-96b2-d9d7b897e8d9-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_63d08656-5fe3-4965-96b2-d9d7b897e8d9-part15', 'scsi-SQEMU_QEMU_HARDDISK_63d08656-5fe3-4965-96b2-d9d7b897e8d9-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_63d08656-5fe3-4965-96b2-d9d7b897e8d9-part16', 'scsi-SQEMU_QEMU_HARDDISK_63d08656-5fe3-4965-96b2-d9d7b897e8d9-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-03-07 00:58:57.468457 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 00:58:57.468484 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-07-00-03-05-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 00:58:57.468497 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:58:57.468508 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': 
None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 00:58:57.468529 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 00:58:57.468541 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 00:58:57.468559 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 00:58:57.468571 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 00:58:57.468592 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e87ab8c-74da-42e5-bd8a-bfd4a87775ea', 'scsi-SQEMU_QEMU_HARDDISK_1e87ab8c-74da-42e5-bd8a-bfd4a87775ea'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e87ab8c-74da-42e5-bd8a-bfd4a87775ea-part1', 'scsi-SQEMU_QEMU_HARDDISK_1e87ab8c-74da-42e5-bd8a-bfd4a87775ea-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e87ab8c-74da-42e5-bd8a-bfd4a87775ea-part14', 'scsi-SQEMU_QEMU_HARDDISK_1e87ab8c-74da-42e5-bd8a-bfd4a87775ea-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e87ab8c-74da-42e5-bd8a-bfd4a87775ea-part15', 'scsi-SQEMU_QEMU_HARDDISK_1e87ab8c-74da-42e5-bd8a-bfd4a87775ea-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e87ab8c-74da-42e5-bd8a-bfd4a87775ea-part16', 'scsi-SQEMU_QEMU_HARDDISK_1e87ab8c-74da-42e5-bd8a-bfd4a87775ea-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-03-07 00:58:57.468612 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-07-00-03-01-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-07 00:58:57.468630 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:58:57.468644 | orchestrator |
2026-03-07 00:58:57.468657 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-03-07 00:58:57.468671 | orchestrator | Saturday 07 March 2026 00:47:08 +0000 (0:00:03.880) 0:00:53.806 ********
2026-03-07 00:58:57.468682 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:58:57.468694 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:58:57.468707 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:58:57.468718 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:58:57.468729 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:58:57.468740 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:58:57.468752 | orchestrator |
2026-03-07 00:58:57.468782 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-03-07 00:58:57.468806 | orchestrator | Saturday 07 March 2026 00:47:11 +0000 (0:00:02.716) 0:00:56.522 ********
2026-03-07 00:58:57.468817 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:58:57.468828 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:58:57.468840 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:58:57.468912 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:58:57.468925 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:58:57.468936 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:58:57.468948 | orchestrator |
2026-03-07 00:58:57.468961 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-07 00:58:57.468972 | orchestrator | Saturday 07 March 2026 00:47:12 +0000 (0:00:01.100) 0:00:57.623 ********
2026-03-07 00:58:57.468984 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:58:57.468995 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:58:57.469007 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:58:57.469019 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:58:57.469031 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:58:57.469044 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:58:57.469067 | orchestrator |
2026-03-07 00:58:57.469079 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-07 00:58:57.469091 | orchestrator | Saturday 07 March 2026 00:47:13 +0000 (0:00:01.325) 0:00:58.949 ********
2026-03-07 00:58:57.469112 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:58:57.469132 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:58:57.469151 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:58:57.469174 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:58:57.469198 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:58:57.469219 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:58:57.469238 | orchestrator |
2026-03-07 00:58:57.469250 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-07 00:58:57.469262 | orchestrator | Saturday 07 March 2026 00:47:14 +0000 (0:00:01.050) 0:00:59.999 ********
2026-03-07 00:58:57.469274 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:58:57.469287 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:58:57.469300 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:58:57.469311 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:58:57.469322 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:58:57.469334 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:58:57.469346 | orchestrator |
2026-03-07 00:58:57.469371 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-07 00:58:57.469384 | orchestrator | Saturday 07 March 2026 00:47:16 +0000 (0:00:02.154) 0:01:02.154 ********
2026-03-07 00:58:57.469395 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:58:57.469406 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:58:57.469419 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:58:57.469430 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:58:57.469442 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:58:57.469454 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:58:57.469467 | orchestrator |
2026-03-07 00:58:57.469480 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-03-07 00:58:57.469491 | orchestrator | Saturday 07 March 2026 00:47:18 +0000 (0:00:01.184) 0:01:03.338 ********
2026-03-07 00:58:57.469504 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-03-07 00:58:57.469517 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-03-07 00:58:57.469528 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-03-07 00:58:57.469540 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-03-07 00:58:57.469552 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-03-07 00:58:57.469563 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-03-07 00:58:57.469575 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-03-07 00:58:57.469587 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-03-07 00:58:57.469599 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-03-07 00:58:57.469610 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2026-03-07 00:58:57.469621 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-03-07 00:58:57.469632 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2026-03-07 00:58:57.469644 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-03-07 00:58:57.469655 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-03-07 00:58:57.469668 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-03-07 00:58:57.469680 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2026-03-07 00:58:57.469691 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-03-07 00:58:57.469703 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2026-03-07 00:58:57.469714 | orchestrator | 2026-03-07 00:58:57.469725 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-07 00:58:57.469737 | orchestrator | Saturday 07 March 2026 00:47:22 +0000 (0:00:04.649) 0:01:07.988 ******** 2026-03-07 00:58:57.469749 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-03-07 00:58:57.469774 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-03-07 00:58:57.469785 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-03-07 00:58:57.469797 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:58:57.469809 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-03-07 00:58:57.469822 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-03-07 00:58:57.469833 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-03-07 00:58:57.469906 | orchestrator | skipping: [testbed-node-4] 
2026-03-07 00:58:57.469924 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-03-07 00:58:57.469936 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-03-07 00:58:57.469948 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-03-07 00:58:57.469959 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:58:57.469970 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-07 00:58:57.469981 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-07 00:58:57.469993 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-07 00:58:57.470005 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:58:57.470060 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-03-07 00:58:57.470078 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-03-07 00:58:57.470089 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-03-07 00:58:57.470101 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:58:57.470113 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-03-07 00:58:57.470126 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-03-07 00:58:57.470137 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-03-07 00:58:57.470150 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:58:57.470161 | orchestrator | 2026-03-07 00:58:57.470174 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-03-07 00:58:57.470187 | orchestrator | Saturday 07 March 2026 00:47:24 +0000 (0:00:01.423) 0:01:09.411 ******** 2026-03-07 00:58:57.470198 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:58:57.470210 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:58:57.470222 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:58:57.470235 | orchestrator | 
included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-07 00:58:57.470247 | orchestrator | 2026-03-07 00:58:57.470259 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-07 00:58:57.470272 | orchestrator | Saturday 07 March 2026 00:47:26 +0000 (0:00:02.455) 0:01:11.866 ******** 2026-03-07 00:58:57.470284 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:58:57.470296 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:58:57.470307 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:58:57.470319 | orchestrator | 2026-03-07 00:58:57.470330 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-07 00:58:57.470341 | orchestrator | Saturday 07 March 2026 00:47:27 +0000 (0:00:00.628) 0:01:12.495 ******** 2026-03-07 00:58:57.470353 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:58:57.470365 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:58:57.470376 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:58:57.470389 | orchestrator | 2026-03-07 00:58:57.470424 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-07 00:58:57.470438 | orchestrator | Saturday 07 March 2026 00:47:28 +0000 (0:00:00.765) 0:01:13.260 ******** 2026-03-07 00:58:57.470450 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:58:57.470461 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:58:57.470473 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:58:57.470484 | orchestrator | 2026-03-07 00:58:57.470496 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-07 00:58:57.470521 | orchestrator | Saturday 07 March 2026 00:47:29 +0000 (0:00:01.119) 0:01:14.380 ******** 2026-03-07 00:58:57.470533 | orchestrator | 
ok: [testbed-node-3] 2026-03-07 00:58:57.470545 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:58:57.470556 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:58:57.470569 | orchestrator | 2026-03-07 00:58:57.470581 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-07 00:58:57.470593 | orchestrator | Saturday 07 March 2026 00:47:30 +0000 (0:00:00.820) 0:01:15.201 ******** 2026-03-07 00:58:57.470604 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-07 00:58:57.470616 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-07 00:58:57.470627 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-07 00:58:57.470639 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:58:57.470651 | orchestrator | 2026-03-07 00:58:57.470662 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-07 00:58:57.470674 | orchestrator | Saturday 07 March 2026 00:47:31 +0000 (0:00:01.046) 0:01:16.248 ******** 2026-03-07 00:58:57.470686 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-07 00:58:57.470698 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-07 00:58:57.470710 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-07 00:58:57.470721 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:58:57.470733 | orchestrator | 2026-03-07 00:58:57.470745 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-07 00:58:57.470757 | orchestrator | Saturday 07 March 2026 00:47:31 +0000 (0:00:00.760) 0:01:17.008 ******** 2026-03-07 00:58:57.470769 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-07 00:58:57.470781 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-07 00:58:57.470793 | orchestrator | skipping: [testbed-node-3] 
=> (item=testbed-node-5)  2026-03-07 00:58:57.470804 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:58:57.470817 | orchestrator | 2026-03-07 00:58:57.470828 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-07 00:58:57.470840 | orchestrator | Saturday 07 March 2026 00:47:32 +0000 (0:00:00.503) 0:01:17.512 ******** 2026-03-07 00:58:57.470878 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:58:57.470890 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:58:57.470901 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:58:57.470912 | orchestrator | 2026-03-07 00:58:57.470923 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-07 00:58:57.470944 | orchestrator | Saturday 07 March 2026 00:47:32 +0000 (0:00:00.394) 0:01:17.907 ******** 2026-03-07 00:58:57.470957 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-03-07 00:58:57.470969 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-03-07 00:58:57.470980 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-03-07 00:58:57.470991 | orchestrator | 2026-03-07 00:58:57.471003 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-03-07 00:58:57.471015 | orchestrator | Saturday 07 March 2026 00:47:34 +0000 (0:00:01.439) 0:01:19.346 ******** 2026-03-07 00:58:57.471026 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-07 00:58:57.471038 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-07 00:58:57.471049 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-07 00:58:57.471061 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-03-07 00:58:57.471072 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-07 00:58:57.471084 | 
orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-07 00:58:57.471095 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-07 00:58:57.471118 | orchestrator | 2026-03-07 00:58:57.471131 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-03-07 00:58:57.471143 | orchestrator | Saturday 07 March 2026 00:47:35 +0000 (0:00:00.938) 0:01:20.284 ******** 2026-03-07 00:58:57.471154 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-07 00:58:57.471166 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-07 00:58:57.471177 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-07 00:58:57.471189 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-03-07 00:58:57.471200 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-07 00:58:57.471211 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-07 00:58:57.471223 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-07 00:58:57.471234 | orchestrator | 2026-03-07 00:58:57.471245 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-07 00:58:57.471256 | orchestrator | Saturday 07 March 2026 00:47:37 +0000 (0:00:02.256) 0:01:22.541 ******** 2026-03-07 00:58:57.471280 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 00:58:57.471293 | orchestrator | 2026-03-07 00:58:57.471304 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] 
********************* 2026-03-07 00:58:57.471315 | orchestrator | Saturday 07 March 2026 00:47:38 +0000 (0:00:01.541) 0:01:24.083 ******** 2026-03-07 00:58:57.471327 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 00:58:57.471338 | orchestrator | 2026-03-07 00:58:57.471349 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-07 00:58:57.471360 | orchestrator | Saturday 07 March 2026 00:47:40 +0000 (0:00:01.624) 0:01:25.708 ******** 2026-03-07 00:58:57.471371 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:58:57.471382 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:58:57.471393 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:58:57.471404 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:58:57.471415 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:58:57.471426 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:58:57.471437 | orchestrator | 2026-03-07 00:58:57.471448 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-07 00:58:57.471459 | orchestrator | Saturday 07 March 2026 00:47:42 +0000 (0:00:01.750) 0:01:27.458 ******** 2026-03-07 00:58:57.471470 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:58:57.471481 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:58:57.471492 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:58:57.471503 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:58:57.471514 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:58:57.471525 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:58:57.471536 | orchestrator | 2026-03-07 00:58:57.471548 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-07 00:58:57.471558 | orchestrator | Saturday 07 March 2026 00:47:43 +0000 
(0:00:01.124) 0:01:28.583 ******** 2026-03-07 00:58:57.471569 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:58:57.471581 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:58:57.471591 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:58:57.471603 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:58:57.471614 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:58:57.471625 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:58:57.471636 | orchestrator | 2026-03-07 00:58:57.471648 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-07 00:58:57.471659 | orchestrator | Saturday 07 March 2026 00:47:45 +0000 (0:00:01.703) 0:01:30.286 ******** 2026-03-07 00:58:57.471680 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:58:57.471692 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:58:57.471704 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:58:57.471715 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:58:57.471725 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:58:57.471737 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:58:57.471748 | orchestrator | 2026-03-07 00:58:57.471758 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-07 00:58:57.471770 | orchestrator | Saturday 07 March 2026 00:47:46 +0000 (0:00:00.926) 0:01:31.213 ******** 2026-03-07 00:58:57.471780 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:58:57.471791 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:58:57.471808 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:58:57.471820 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:58:57.471832 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:58:57.471843 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:58:57.471874 | orchestrator | 2026-03-07 00:58:57.471886 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 
2026-03-07 00:58:57.471897 | orchestrator | Saturday 07 March 2026 00:47:47 +0000 (0:00:01.514) 0:01:32.727 ******** 2026-03-07 00:58:57.471908 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:58:57.471920 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:58:57.471932 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:58:57.471942 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:58:57.471953 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:58:57.471964 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:58:57.471975 | orchestrator | 2026-03-07 00:58:57.471986 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-07 00:58:57.471997 | orchestrator | Saturday 07 March 2026 00:47:48 +0000 (0:00:01.072) 0:01:33.799 ******** 2026-03-07 00:58:57.472009 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:58:57.472020 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:58:57.472031 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:58:57.472042 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:58:57.472053 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:58:57.472063 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:58:57.472074 | orchestrator | 2026-03-07 00:58:57.472085 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-07 00:58:57.472096 | orchestrator | Saturday 07 March 2026 00:47:49 +0000 (0:00:01.216) 0:01:35.016 ******** 2026-03-07 00:58:57.472107 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:58:57.472119 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:58:57.472130 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:58:57.472141 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:58:57.472152 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:58:57.472163 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:58:57.472174 | orchestrator | 2026-03-07 
00:58:57.472185 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-07 00:58:57.472196 | orchestrator | Saturday 07 March 2026 00:47:51 +0000 (0:00:01.229) 0:01:36.246 ******** 2026-03-07 00:58:57.472207 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:58:57.472218 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:58:57.472229 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:58:57.472240 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:58:57.472251 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:58:57.472261 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:58:57.472272 | orchestrator | 2026-03-07 00:58:57.472284 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-07 00:58:57.472295 | orchestrator | Saturday 07 March 2026 00:47:52 +0000 (0:00:01.594) 0:01:37.841 ******** 2026-03-07 00:58:57.472306 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:58:57.472317 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:58:57.472328 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:58:57.472339 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:58:57.472363 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:58:57.472374 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:58:57.472394 | orchestrator | 2026-03-07 00:58:57.472406 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-07 00:58:57.472417 | orchestrator | Saturday 07 March 2026 00:47:54 +0000 (0:00:01.367) 0:01:39.209 ******** 2026-03-07 00:58:57.472428 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:58:57.472439 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:58:57.472450 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:58:57.472461 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:58:57.472472 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:58:57.472483 | 
orchestrator | ok: [testbed-node-2] 2026-03-07 00:58:57.472494 | orchestrator | 2026-03-07 00:58:57.472505 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-07 00:58:57.472517 | orchestrator | Saturday 07 March 2026 00:47:55 +0000 (0:00:01.941) 0:01:41.151 ******** 2026-03-07 00:58:57.472528 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:58:57.472539 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:58:57.472550 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:58:57.472560 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:58:57.472571 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:58:57.472583 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:58:57.472600 | orchestrator | 2026-03-07 00:58:57.472620 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-07 00:58:57.472648 | orchestrator | Saturday 07 March 2026 00:47:57 +0000 (0:00:01.386) 0:01:42.537 ******** 2026-03-07 00:58:57.472671 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:58:57.472691 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:58:57.472709 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:58:57.472728 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:58:57.472744 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:58:57.472762 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:58:57.472781 | orchestrator | 2026-03-07 00:58:57.472800 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-07 00:58:57.472822 | orchestrator | Saturday 07 March 2026 00:47:58 +0000 (0:00:01.253) 0:01:43.791 ******** 2026-03-07 00:58:57.472842 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:58:57.472877 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:58:57.472888 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:58:57.472899 | orchestrator | skipping: [testbed-node-0] 2026-03-07 
00:58:57.472910 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:58:57.472921 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:58:57.472931 | orchestrator | 2026-03-07 00:58:57.472942 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-07 00:58:57.472953 | orchestrator | Saturday 07 March 2026 00:47:59 +0000 (0:00:00.824) 0:01:44.616 ******** 2026-03-07 00:58:57.472964 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:58:57.472976 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:58:57.472987 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:58:57.472998 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:58:57.473009 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:58:57.473020 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:58:57.473030 | orchestrator | 2026-03-07 00:58:57.473042 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-07 00:58:57.473053 | orchestrator | Saturday 07 March 2026 00:48:00 +0000 (0:00:01.216) 0:01:45.832 ******** 2026-03-07 00:58:57.473063 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:58:57.473083 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:58:57.473095 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:58:57.473105 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:58:57.473116 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:58:57.473127 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:58:57.473138 | orchestrator | 2026-03-07 00:58:57.473149 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-07 00:58:57.473171 | orchestrator | Saturday 07 March 2026 00:48:02 +0000 (0:00:01.464) 0:01:47.296 ******** 2026-03-07 00:58:57.473183 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:58:57.473193 | orchestrator | skipping: [testbed-node-4] 2026-03-07 
00:58:57.473204 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:58:57.473215 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:58:57.473226 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:58:57.473236 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:58:57.473247 | orchestrator | 2026-03-07 00:58:57.473258 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-07 00:58:57.473269 | orchestrator | Saturday 07 March 2026 00:48:04 +0000 (0:00:02.153) 0:01:49.449 ******** 2026-03-07 00:58:57.473280 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:58:57.473291 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:58:57.473301 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:58:57.473312 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:58:57.473322 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:58:57.473333 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:58:57.473344 | orchestrator | 2026-03-07 00:58:57.473355 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-07 00:58:57.473366 | orchestrator | Saturday 07 March 2026 00:48:05 +0000 (0:00:01.691) 0:01:51.141 ******** 2026-03-07 00:58:57.473376 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:58:57.473387 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:58:57.473398 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:58:57.473408 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:58:57.473419 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:58:57.473429 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:58:57.473440 | orchestrator | 2026-03-07 00:58:57.473451 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-03-07 00:58:57.473462 | orchestrator | Saturday 07 March 2026 00:48:07 +0000 (0:00:01.892) 0:01:53.034 ******** 2026-03-07 00:58:57.473473 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:58:57.473484 | 
orchestrator | changed: [testbed-node-4]
2026-03-07 00:58:57.473494 | orchestrator | changed: [testbed-node-3]
2026-03-07 00:58:57.473505 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:58:57.473516 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:58:57.473527 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:58:57.473538 | orchestrator |
2026-03-07 00:58:57.473550 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-03-07 00:58:57.473561 | orchestrator | Saturday 07 March 2026 00:48:09 +0000 (0:00:01.968) 0:01:55.003 ********
2026-03-07 00:58:57.473573 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:58:57.473584 | orchestrator | changed: [testbed-node-4]
2026-03-07 00:58:57.473595 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:58:57.473606 | orchestrator | changed: [testbed-node-3]
2026-03-07 00:58:57.473617 | orchestrator | changed: [testbed-node-5]
2026-03-07 00:58:57.473641 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:58:57.473652 | orchestrator |
2026-03-07 00:58:57.473663 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-03-07 00:58:57.473674 | orchestrator | Saturday 07 March 2026 00:48:12 +0000 (0:00:02.711) 0:01:57.714 ********
2026-03-07 00:58:57.473686 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-07 00:58:57.473697 | orchestrator |
2026-03-07 00:58:57.473709 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-03-07 00:58:57.473720 | orchestrator | Saturday 07 March 2026 00:48:13 +0000 (0:00:01.402) 0:01:59.116 ********
2026-03-07 00:58:57.473731 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:58:57.473742 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:58:57.473753 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:58:57.473763 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:58:57.473774 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:58:57.473800 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:58:57.473811 | orchestrator |
2026-03-07 00:58:57.473831 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-03-07 00:58:57.473925 | orchestrator | Saturday 07 March 2026 00:48:14 +0000 (0:00:00.694) 0:01:59.811 ********
2026-03-07 00:58:57.473949 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:58:57.473967 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:58:57.473983 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:58:57.474000 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:58:57.474083 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:58:57.474108 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:58:57.474126 | orchestrator |
2026-03-07 00:58:57.474139 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-03-07 00:58:57.474151 | orchestrator | Saturday 07 March 2026 00:48:15 +0000 (0:00:01.018) 0:02:00.829 ********
2026-03-07 00:58:57.474162 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-07 00:58:57.474172 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-07 00:58:57.474183 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-07 00:58:57.474194 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-07 00:58:57.474204 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-07 00:58:57.474215 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-07 00:58:57.474226 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-07 00:58:57.474237 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-07 00:58:57.474247 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-07 00:58:57.474267 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-07 00:58:57.474278 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-07 00:58:57.474289 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-07 00:58:57.474300 | orchestrator |
2026-03-07 00:58:57.474311 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-03-07 00:58:57.474321 | orchestrator | Saturday 07 March 2026 00:48:17 +0000 (0:00:01.645) 0:02:02.475 ********
2026-03-07 00:58:57.474332 | orchestrator | changed: [testbed-node-3]
2026-03-07 00:58:57.474342 | orchestrator | changed: [testbed-node-4]
2026-03-07 00:58:57.474353 | orchestrator | changed: [testbed-node-5]
2026-03-07 00:58:57.474364 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:58:57.474374 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:58:57.474385 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:58:57.474395 | orchestrator |
2026-03-07 00:58:57.474407 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-03-07 00:58:57.474418 | orchestrator | Saturday 07 March 2026 00:48:18 +0000 (0:00:01.644) 0:02:04.119 ********
2026-03-07 00:58:57.474428 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:58:57.474439 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:58:57.474449 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:58:57.474460 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:58:57.474471 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:58:57.474481 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:58:57.474491 | orchestrator |
2026-03-07 00:58:57.474501 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-03-07 00:58:57.474510 | orchestrator | Saturday 07 March 2026 00:48:19 +0000 (0:00:00.820) 0:02:04.939 ********
2026-03-07 00:58:57.474520 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:58:57.474530 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:58:57.474539 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:58:57.474559 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:58:57.474569 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:58:57.474578 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:58:57.474588 | orchestrator |
2026-03-07 00:58:57.474597 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-03-07 00:58:57.474607 | orchestrator | Saturday 07 March 2026 00:48:20 +0000 (0:00:01.031) 0:02:05.971 ********
2026-03-07 00:58:57.474617 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:58:57.474627 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:58:57.474636 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:58:57.474645 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:58:57.474655 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:58:57.474664 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:58:57.474673 | orchestrator |
2026-03-07 00:58:57.474683 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-03-07 00:58:57.474712 | orchestrator | Saturday 07 March 2026 00:48:21 +0000 (0:00:00.781) 0:02:06.752 ********
2026-03-07 00:58:57.474723 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-07 00:58:57.474733 | orchestrator |
2026-03-07 00:58:57.474743 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-03-07 00:58:57.474753 | orchestrator | Saturday 07 March 2026 00:48:23 +0000 (0:00:01.519) 0:02:08.272 ********
2026-03-07 00:58:57.474762 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:58:57.474772 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:58:57.474781 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:58:57.474791 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:58:57.474801 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:58:57.474810 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:58:57.474820 | orchestrator |
2026-03-07 00:58:57.474830 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-03-07 00:58:57.474839 | orchestrator | Saturday 07 March 2026 00:49:18 +0000 (0:00:55.498) 0:03:03.770 ********
2026-03-07 00:58:57.474870 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-07 00:58:57.474881 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-07 00:58:57.474891 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-07 00:58:57.474900 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:58:57.474910 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-07 00:58:57.474920 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-07 00:58:57.474929 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-07 00:58:57.474939 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:58:57.474949 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-07 00:58:57.474958 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-07 00:58:57.474968 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-07 00:58:57.474977 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:58:57.474987 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-07 00:58:57.474997 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-07 00:58:57.475006 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-07 00:58:57.475016 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-07 00:58:57.475025 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:58:57.475035 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-07 00:58:57.475046 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-07 00:58:57.475066 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-07 00:58:57.475076 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:58:57.475086 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-07 00:58:57.475096 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-07 00:58:57.475105 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:58:57.475115 | orchestrator |
2026-03-07 00:58:57.475125 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-03-07 00:58:57.475135 | orchestrator | Saturday 07 March 2026 00:49:19 +0000 (0:00:00.978) 0:03:04.749 ********
2026-03-07 00:58:57.475145 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:58:57.475155 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:58:57.475164 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:58:57.475174 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:58:57.475183 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:58:57.475193 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:58:57.475202 | orchestrator |
2026-03-07 00:58:57.475212 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-03-07 00:58:57.475222 | orchestrator | Saturday 07 March 2026 00:49:20 +0000 (0:00:01.308) 0:03:06.057 ********
2026-03-07 00:58:57.475231 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:58:57.475241 | orchestrator |
2026-03-07 00:58:57.475251 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-03-07 00:58:57.475260 | orchestrator | Saturday 07 March 2026 00:49:21 +0000 (0:00:00.197) 0:03:06.255 ********
2026-03-07 00:58:57.475270 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:58:57.475280 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:58:57.475290 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:58:57.475300 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:58:57.475309 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:58:57.475319 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:58:57.475328 | orchestrator |
2026-03-07 00:58:57.475338 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-03-07 00:58:57.475348 | orchestrator | Saturday 07 March 2026 00:49:22 +0000 (0:00:01.076) 0:03:07.331 ********
2026-03-07 00:58:57.475357 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:58:57.475367 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:58:57.475377 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:58:57.475386 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:58:57.475395 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:58:57.475405 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:58:57.475415 | orchestrator |
2026-03-07 00:58:57.475425 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-03-07 00:58:57.475434 | orchestrator | Saturday 07 March 2026 00:49:23 +0000 (0:00:01.567) 0:03:08.898 ********
2026-03-07 00:58:57.475444 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:58:57.475454 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:58:57.475463 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:58:57.475482 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:58:57.475492 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:58:57.475501 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:58:57.475511 | orchestrator |
2026-03-07 00:58:57.475521 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-03-07 00:58:57.475530 | orchestrator | Saturday 07 March 2026 00:49:24 +0000 (0:00:01.210) 0:03:10.109 ********
2026-03-07 00:58:57.475540 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:58:57.475550 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:58:57.475559 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:58:57.475569 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:58:57.475579 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:58:57.475588 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:58:57.475598 | orchestrator |
2026-03-07 00:58:57.475616 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-03-07 00:58:57.475626 | orchestrator | Saturday 07 March 2026 00:49:28 +0000 (0:00:03.401) 0:03:13.511 ********
2026-03-07 00:58:57.475635 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:58:57.475645 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:58:57.475654 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:58:57.475664 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:58:57.475673 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:58:57.475683 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:58:57.475692 | orchestrator |
2026-03-07 00:58:57.475702 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-03-07 00:58:57.475712 | orchestrator | Saturday 07 March 2026 00:49:29 +0000 (0:00:00.933) 0:03:14.444 ********
2026-03-07 00:58:57.475723 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-07 00:58:57.475733 | orchestrator |
2026-03-07 00:58:57.475743 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-03-07 00:58:57.475867 | orchestrator | Saturday 07 March 2026 00:49:31 +0000 (0:00:02.083) 0:03:16.528 ********
2026-03-07 00:58:57.475897 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:58:57.475907 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:58:57.475917 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:58:57.475926 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:58:57.475936 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:58:57.475945 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:58:57.475955 | orchestrator |
2026-03-07 00:58:57.475965 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-03-07 00:58:57.475975 | orchestrator | Saturday 07 March 2026 00:49:32 +0000 (0:00:01.425) 0:03:17.953 ********
2026-03-07 00:58:57.475984 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:58:57.475994 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:58:57.476003 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:58:57.476013 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:58:57.476023 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:58:57.476032 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:58:57.476042 | orchestrator |
2026-03-07 00:58:57.476052 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-03-07 00:58:57.476062 | orchestrator | Saturday 07 March 2026 00:49:34 +0000 (0:00:01.230) 0:03:19.183 ********
2026-03-07 00:58:57.476078 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:58:57.476088 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:58:57.476098 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:58:57.476107 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:58:57.476117 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:58:57.476126 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:58:57.476136 | orchestrator |
2026-03-07 00:58:57.476146 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-03-07 00:58:57.476156 | orchestrator | Saturday 07 March 2026 00:49:36 +0000 (0:00:01.989) 0:03:21.173 ********
2026-03-07 00:58:57.476165 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:58:57.476175 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:58:57.476184 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:58:57.476194 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:58:57.476204 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:58:57.476213 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:58:57.476223 | orchestrator |
2026-03-07 00:58:57.476233 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-03-07 00:58:57.476243 | orchestrator | Saturday 07 March 2026 00:49:36 +0000 (0:00:00.722) 0:03:21.895 ********
2026-03-07 00:58:57.476253 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:58:57.476262 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:58:57.476272 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:58:57.476290 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:58:57.476299 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:58:57.476309 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:58:57.476318 | orchestrator |
2026-03-07 00:58:57.476328 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-03-07 00:58:57.476338 | orchestrator | Saturday 07 March 2026 00:49:37 +0000 (0:00:01.087) 0:03:22.982 ********
2026-03-07 00:58:57.476347 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:58:57.476357 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:58:57.476368 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:58:57.476387 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:58:57.476405 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:58:57.476424 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:58:57.476440 | orchestrator |
2026-03-07 00:58:57.476458 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-03-07 00:58:57.476474 | orchestrator | Saturday 07 March 2026 00:49:38 +0000 (0:00:00.850) 0:03:23.833 ********
2026-03-07 00:58:57.476492 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:58:57.476510 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:58:57.476527 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:58:57.476545 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:58:57.476562 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:58:57.476578 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:58:57.476588 | orchestrator |
2026-03-07 00:58:57.476598 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-03-07 00:58:57.476608 | orchestrator | Saturday 07 March 2026 00:49:39 +0000 (0:00:01.284) 0:03:25.118 ********
2026-03-07 00:58:57.476617 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:58:57.476638 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:58:57.476649 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:58:57.476658 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:58:57.476668 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:58:57.476677 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:58:57.476687 | orchestrator |
2026-03-07 00:58:57.476697 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-03-07 00:58:57.476706 | orchestrator | Saturday 07 March 2026 00:49:40 +0000 (0:00:00.969) 0:03:26.087 ********
2026-03-07 00:58:57.476716 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:58:57.476726 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:58:57.476735 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:58:57.476745 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:58:57.476754 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:58:57.476764 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:58:57.476773 | orchestrator |
2026-03-07 00:58:57.476783 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-03-07 00:58:57.476792 | orchestrator | Saturday 07 March 2026 00:49:42 +0000 (0:00:01.897) 0:03:27.984 ********
2026-03-07 00:58:57.476803 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-07 00:58:57.476813 | orchestrator |
2026-03-07 00:58:57.476822 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-03-07 00:58:57.476832 | orchestrator | Saturday 07 March 2026 00:49:44 +0000 (0:00:01.767) 0:03:29.752 ********
2026-03-07 00:58:57.476842 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph)
2026-03-07 00:58:57.476880 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph)
2026-03-07 00:58:57.476898 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph)
2026-03-07 00:58:57.476922 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/)
2026-03-07 00:58:57.476940 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/)
2026-03-07 00:58:57.476956 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph)
2026-03-07 00:58:57.476972 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/)
2026-03-07 00:58:57.477003 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph)
2026-03-07 00:58:57.477021 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon)
2026-03-07 00:58:57.477037 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon)
2026-03-07 00:58:57.477051 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/)
2026-03-07 00:58:57.477060 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph)
2026-03-07 00:58:57.477069 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd)
2026-03-07 00:58:57.477079 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon)
2026-03-07 00:58:57.477089 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd)
2026-03-07 00:58:57.477099 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/)
2026-03-07 00:58:57.477108 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon)
2026-03-07 00:58:57.477125 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/)
2026-03-07 00:58:57.477135 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds)
2026-03-07 00:58:57.477145 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds)
2026-03-07 00:58:57.477154 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd)
2026-03-07 00:58:57.477164 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon)
2026-03-07 00:58:57.477174 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd)
2026-03-07 00:58:57.477184 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon)
2026-03-07 00:58:57.477193 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp)
2026-03-07 00:58:57.477203 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp)
2026-03-07 00:58:57.477212 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds)
2026-03-07 00:58:57.477222 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds)
2026-03-07 00:58:57.477231 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd)
2026-03-07 00:58:57.477241 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd)
2026-03-07 00:58:57.477251 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash)
2026-03-07 00:58:57.477260 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash)
2026-03-07 00:58:57.477270 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp)
2026-03-07 00:58:57.477279 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds)
2026-03-07 00:58:57.477289 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds)
2026-03-07 00:58:57.477298 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw)
2026-03-07 00:58:57.477308 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw)
2026-03-07 00:58:57.477317 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp)
2026-03-07 00:58:57.477327 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp)
2026-03-07 00:58:57.477336 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash)
2026-03-07 00:58:57.477346 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp)
2026-03-07 00:58:57.477356 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-07 00:58:57.477366 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-07 00:58:57.477375 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash)
2026-03-07 00:58:57.477384 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
2026-03-07 00:58:57.477394 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash)
2026-03-07 00:58:57.477404 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
2026-03-07 00:58:57.477423 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-07 00:58:57.477433 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-07 00:58:57.477443 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-07 00:58:57.477477 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash)
2026-03-07 00:58:57.477487 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-07 00:58:57.477496 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw)
2026-03-07 00:58:57.477506 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-07 00:58:57.477515 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-07 00:58:57.477525 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-07 00:58:57.477534 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw)
2026-03-07 00:58:57.477543 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-07 00:58:57.477553 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-07 00:58:57.477563 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-07 00:58:57.477572 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-07 00:58:57.477582 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-07 00:58:57.477591 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-07 00:58:57.477600 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-07 00:58:57.477610 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-07 00:58:57.477620 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-07 00:58:57.477629 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-07 00:58:57.477639 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-07 00:58:57.477648 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-07 00:58:57.477658 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-07 00:58:57.477667 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-07 00:58:57.477676 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-07 00:58:57.477686 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-07 00:58:57.477696 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-07 00:58:57.477705 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-07 00:58:57.477720 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-07 00:58:57.477729 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-07 00:58:57.477739 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph)
2026-03-07 00:58:57.477748 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph)
2026-03-07 00:58:57.477758 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-07 00:58:57.477768 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-07 00:58:57.477777 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-07 00:58:57.477787 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-07 00:58:57.477796 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph)
2026-03-07 00:58:57.477806 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph)
2026-03-07 00:58:57.477816 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-07 00:58:57.477825 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph)
2026-03-07 00:58:57.477835 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-07 00:58:57.477918 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph)
2026-03-07 00:58:57.477946 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph)
2026-03-07 00:58:57.477974 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph)
2026-03-07 00:58:57.477990 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph)
2026-03-07 00:58:57.478006 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-07 00:58:57.478166 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph)
2026-03-07 00:58:57.478184 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph)
2026-03-07 00:58:57.478198 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph)
2026-03-07 00:58:57.478213 | orchestrator |
2026-03-07 00:58:57.478228 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-03-07 00:58:57.478244 | orchestrator | Saturday 07 March 2026 00:49:52 +0000 (0:00:07.620) 0:03:37.373 ********
2026-03-07 00:58:57.478259 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:58:57.478272 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:58:57.478284 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:58:57.478297 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-07 00:58:57.478311 | orchestrator |
2026-03-07 00:58:57.478324 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-03-07 00:58:57.478337 | orchestrator | Saturday 07 March 2026 00:49:53 +0000 (0:00:01.042) 0:03:38.415 ********
2026-03-07 00:58:57.478401 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-03-07 00:58:57.478419 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-03-07 00:58:57.478432 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-07 00:58:57.478447 | orchestrator |
2026-03-07 00:58:57.478461 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-03-07 00:58:57.478474 | orchestrator | Saturday 07 March 2026 00:49:54 +0000 (0:00:01.427) 0:03:39.842 ********
2026-03-07 00:58:57.478488 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-03-07 00:58:57.478502 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-03-07 00:58:57.478515 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-07 00:58:57.478528 | orchestrator |
2026-03-07 00:58:57.478540 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-03-07 00:58:57.478553 | orchestrator | Saturday 07 March 2026 00:49:56 +0000 (0:00:01.967) 0:03:41.810 ********
2026-03-07 00:58:57.478566 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:58:57.478580 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:58:57.478593 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:58:57.478606 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:58:57.478618 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:58:57.478630 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:58:57.478642 | orchestrator |
2026-03-07 00:58:57.478654 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-03-07 00:58:57.478667 | orchestrator | Saturday 07 March 2026 00:49:57 +0000 (0:00:01.297) 0:03:43.107 ********
2026-03-07 00:58:57.478681 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:58:57.478694 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:58:57.478706 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:58:57.478719 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:58:57.478731 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:58:57.478744 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:58:57.478759 | orchestrator |
2026-03-07 00:58:57.478771 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-03-07 00:58:57.478804 | orchestrator | Saturday 07 March 2026 00:49:59 +0000 (0:00:01.038) 0:03:44.427 ********
2026-03-07 00:58:57.478818 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:58:57.478831 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:58:57.478844 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:58:57.478886 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:58:57.478898 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:58:57.478921 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:58:57.478933 | orchestrator |
2026-03-07 00:58:57.478946 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-03-07 00:58:57.478958 | orchestrator | Saturday 07 March 2026 00:50:00 +0000 (0:00:01.038) 0:03:45.466 ********
2026-03-07 00:58:57.478970 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:58:57.478983 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:58:57.478995 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:58:57.479009 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:58:57.479021 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:58:57.479034 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:58:57.479048 | orchestrator |
2026-03-07 00:58:57.479062 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-03-07 00:58:57.479075 | orchestrator | Saturday 07 March 2026 00:50:01 +0000 (0:00:01.165) 0:03:46.631 ********
2026-03-07 00:58:57.479088 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:58:57.479101 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:58:57.479113 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:58:57.479127 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:58:57.479141 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:58:57.479154 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:58:57.479167 | orchestrator |
2026-03-07 00:58:57.479181 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-03-07 00:58:57.479194 | orchestrator | Saturday 07 March 2026 00:50:02 +0000 (0:00:00.642) 0:03:47.273 ********
2026-03-07 00:58:57.479206 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:58:57.479219 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:58:57.479232 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:58:57.479244 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:58:57.479256 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:58:57.479269 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:58:57.479282 | orchestrator |
2026-03-07 00:58:57.479296 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-03-07 00:58:57.479308 | orchestrator | Saturday 07 March 2026 00:50:03 +0000 (0:00:00.899) 0:03:48.172 ********
2026-03-07 00:58:57.479321 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:58:57.479334 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:58:57.479348 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:58:57.479361 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:58:57.479375 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:58:57.479388 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:58:57.479401 | orchestrator |
2026-03-07 00:58:57.479414 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-03-07 00:58:57.479428 | orchestrator | Saturday 07 March 2026 00:50:03 +0000 (0:00:00.757) 0:03:48.930 ********
2026-03-07 00:58:57.479441 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:58:57.479456 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:58:57.479468 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:58:57.479481 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:58:57.479569 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:58:57.479587 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:58:57.479601 | orchestrator |
2026-03-07 00:58:57.479614 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-03-07 00:58:57.479627 | orchestrator | Saturday 07 March 2026 00:50:04 +0000 (0:00:01.096) 0:03:50.027 ********
2026-03-07 00:58:57.479654 | orchestrator | skipping: 
[testbed-node-0] 2026-03-07 00:58:57.479662 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:58:57.479670 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:58:57.479678 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:58:57.479687 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:58:57.479695 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:58:57.479702 | orchestrator | 2026-03-07 00:58:57.479710 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-03-07 00:58:57.479718 | orchestrator | Saturday 07 March 2026 00:50:08 +0000 (0:00:03.300) 0:03:53.328 ******** 2026-03-07 00:58:57.479726 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:58:57.479734 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:58:57.479742 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:58:57.479749 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:58:57.479757 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:58:57.479765 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:58:57.479773 | orchestrator | 2026-03-07 00:58:57.479781 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-03-07 00:58:57.479789 | orchestrator | Saturday 07 March 2026 00:50:09 +0000 (0:00:01.032) 0:03:54.360 ******** 2026-03-07 00:58:57.479797 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:58:57.479804 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:58:57.479812 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:58:57.479820 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:58:57.479828 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:58:57.479836 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:58:57.479843 | orchestrator | 2026-03-07 00:58:57.479872 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-03-07 00:58:57.479880 | orchestrator | Saturday 07 March 2026 00:50:09 +0000 
(0:00:00.691) 0:03:55.052 ******** 2026-03-07 00:58:57.479888 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:58:57.479896 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:58:57.479907 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:58:57.479921 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:58:57.479942 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:58:57.479957 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:58:57.479971 | orchestrator | 2026-03-07 00:58:57.479986 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-03-07 00:58:57.480001 | orchestrator | Saturday 07 March 2026 00:50:11 +0000 (0:00:01.393) 0:03:56.446 ******** 2026-03-07 00:58:57.480015 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-07 00:58:57.480031 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-07 00:58:57.480048 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-07 00:58:57.480056 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:58:57.480064 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:58:57.480072 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:58:57.480080 | orchestrator | 2026-03-07 00:58:57.480088 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-03-07 00:58:57.480096 | orchestrator | Saturday 07 March 2026 00:50:12 +0000 (0:00:00.911) 0:03:57.357 ******** 2026-03-07 00:58:57.480106 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast 
endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])  2026-03-07 00:58:57.480118 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])  2026-03-07 00:58:57.480136 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:58:57.480145 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])  2026-03-07 00:58:57.480153 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])  2026-03-07 00:58:57.480161 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:58:57.480207 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])  2026-03-07 00:58:57.480217 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}]) 
 2026-03-07 00:58:57.480225 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:58:57.480233 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:58:57.480241 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:58:57.480249 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:58:57.480257 | orchestrator | 2026-03-07 00:58:57.480265 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-03-07 00:58:57.480273 | orchestrator | Saturday 07 March 2026 00:50:13 +0000 (0:00:01.249) 0:03:58.607 ******** 2026-03-07 00:58:57.480281 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:58:57.480288 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:58:57.480296 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:58:57.480304 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:58:57.480312 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:58:57.480319 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:58:57.480327 | orchestrator | 2026-03-07 00:58:57.480335 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-03-07 00:58:57.480343 | orchestrator | Saturday 07 March 2026 00:50:14 +0000 (0:00:00.934) 0:03:59.542 ******** 2026-03-07 00:58:57.480351 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:58:57.480359 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:58:57.480367 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:58:57.480374 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:58:57.480382 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:58:57.480390 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:58:57.480397 | orchestrator | 2026-03-07 00:58:57.480405 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-07 00:58:57.480414 | orchestrator | Saturday 07 March 
2026 00:50:15 +0000 (0:00:01.321) 0:04:00.863 ******** 2026-03-07 00:58:57.480421 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:58:57.480429 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:58:57.480437 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:58:57.480444 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:58:57.480452 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:58:57.480460 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:58:57.480468 | orchestrator | 2026-03-07 00:58:57.480475 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-07 00:58:57.480484 | orchestrator | Saturday 07 March 2026 00:50:16 +0000 (0:00:00.866) 0:04:01.730 ******** 2026-03-07 00:58:57.480499 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:58:57.480507 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:58:57.480515 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:58:57.480523 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:58:57.480531 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:58:57.480538 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:58:57.480546 | orchestrator | 2026-03-07 00:58:57.480564 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-07 00:58:57.480572 | orchestrator | Saturday 07 March 2026 00:50:17 +0000 (0:00:01.137) 0:04:02.867 ******** 2026-03-07 00:58:57.480579 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:58:57.480587 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:58:57.480595 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:58:57.480603 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:58:57.480610 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:58:57.480618 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:58:57.480626 | orchestrator | 2026-03-07 00:58:57.480634 | orchestrator | TASK 
[ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-07 00:58:57.480642 | orchestrator | Saturday 07 March 2026 00:50:18 +0000 (0:00:00.785) 0:04:03.653 ******** 2026-03-07 00:58:57.480649 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:58:57.480657 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:58:57.480665 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:58:57.480673 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:58:57.480681 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:58:57.480689 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:58:57.480697 | orchestrator | 2026-03-07 00:58:57.480705 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-07 00:58:57.480713 | orchestrator | Saturday 07 March 2026 00:50:19 +0000 (0:00:01.104) 0:04:04.758 ******** 2026-03-07 00:58:57.480721 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-07 00:58:57.480729 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-07 00:58:57.480737 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-07 00:58:57.480744 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:58:57.480752 | orchestrator | 2026-03-07 00:58:57.480760 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-07 00:58:57.480768 | orchestrator | Saturday 07 March 2026 00:50:20 +0000 (0:00:00.562) 0:04:05.320 ******** 2026-03-07 00:58:57.480776 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-07 00:58:57.480784 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-07 00:58:57.480792 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-07 00:58:57.480799 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:58:57.480807 | orchestrator | 2026-03-07 00:58:57.480815 | orchestrator | TASK 
[ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-07 00:58:57.480823 | orchestrator | Saturday 07 March 2026 00:50:20 +0000 (0:00:00.485) 0:04:05.806 ******** 2026-03-07 00:58:57.480831 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-07 00:58:57.480839 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-07 00:58:57.480899 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-07 00:58:57.480909 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:58:57.480917 | orchestrator | 2026-03-07 00:58:57.480952 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-07 00:58:57.480961 | orchestrator | Saturday 07 March 2026 00:50:21 +0000 (0:00:00.413) 0:04:06.219 ******** 2026-03-07 00:58:57.480969 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:58:57.480977 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:58:57.480985 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:58:57.480993 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:58:57.481000 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:58:57.481015 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:58:57.481023 | orchestrator | 2026-03-07 00:58:57.481031 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-07 00:58:57.481039 | orchestrator | Saturday 07 March 2026 00:50:21 +0000 (0:00:00.659) 0:04:06.879 ******** 2026-03-07 00:58:57.481047 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-03-07 00:58:57.481054 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-03-07 00:58:57.481062 | orchestrator | skipping: [testbed-node-0] => (item=0)  2026-03-07 00:58:57.481070 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-03-07 00:58:57.481078 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:58:57.481086 | orchestrator | skipping: [testbed-node-1] => 
(item=0)  2026-03-07 00:58:57.481094 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:58:57.481101 | orchestrator | skipping: [testbed-node-2] => (item=0)  2026-03-07 00:58:57.481109 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:58:57.481119 | orchestrator | 2026-03-07 00:58:57.481132 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-03-07 00:58:57.481145 | orchestrator | Saturday 07 March 2026 00:50:24 +0000 (0:00:02.486) 0:04:09.365 ******** 2026-03-07 00:58:57.481157 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:58:57.481170 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:58:57.481182 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:58:57.481194 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:58:57.481207 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:58:57.481219 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:58:57.481231 | orchestrator | 2026-03-07 00:58:57.481245 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-07 00:58:57.481258 | orchestrator | Saturday 07 March 2026 00:50:27 +0000 (0:00:03.622) 0:04:12.988 ******** 2026-03-07 00:58:57.481271 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:58:57.481285 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:58:57.481298 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:58:57.481309 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:58:57.481320 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:58:57.481330 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:58:57.481337 | orchestrator | 2026-03-07 00:58:57.481343 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-03-07 00:58:57.481350 | orchestrator | Saturday 07 March 2026 00:50:29 +0000 (0:00:01.277) 0:04:14.265 ******** 2026-03-07 00:58:57.481357 | orchestrator | skipping: 
[testbed-node-3] 2026-03-07 00:58:57.481363 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:58:57.481370 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:58:57.481377 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 00:58:57.481384 | orchestrator | 2026-03-07 00:58:57.481397 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2026-03-07 00:58:57.481404 | orchestrator | Saturday 07 March 2026 00:50:30 +0000 (0:00:01.287) 0:04:15.553 ******** 2026-03-07 00:58:57.481410 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:58:57.481417 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:58:57.481423 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:58:57.481430 | orchestrator | 2026-03-07 00:58:57.481437 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2026-03-07 00:58:57.481444 | orchestrator | Saturday 07 March 2026 00:50:30 +0000 (0:00:00.378) 0:04:15.931 ******** 2026-03-07 00:58:57.481451 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:58:57.481457 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:58:57.481464 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:58:57.481470 | orchestrator | 2026-03-07 00:58:57.481477 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2026-03-07 00:58:57.481484 | orchestrator | Saturday 07 March 2026 00:50:32 +0000 (0:00:01.600) 0:04:17.532 ******** 2026-03-07 00:58:57.481490 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-07 00:58:57.481503 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-07 00:58:57.481510 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-07 00:58:57.481516 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:58:57.481523 | orchestrator | 2026-03-07 
00:58:57.481529 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2026-03-07 00:58:57.481536 | orchestrator | Saturday 07 March 2026 00:50:33 +0000 (0:00:00.827) 0:04:18.360 ******** 2026-03-07 00:58:57.481543 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:58:57.481549 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:58:57.481556 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:58:57.481563 | orchestrator | 2026-03-07 00:58:57.481569 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-03-07 00:58:57.481576 | orchestrator | Saturday 07 March 2026 00:50:33 +0000 (0:00:00.408) 0:04:18.768 ******** 2026-03-07 00:58:57.481582 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:58:57.481589 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:58:57.481596 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:58:57.481602 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-5, testbed-node-4 2026-03-07 00:58:57.481609 | orchestrator | 2026-03-07 00:58:57.481616 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2026-03-07 00:58:57.481623 | orchestrator | Saturday 07 March 2026 00:50:34 +0000 (0:00:01.340) 0:04:20.109 ******** 2026-03-07 00:58:57.481629 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-07 00:58:57.481636 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-07 00:58:57.481643 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-07 00:58:57.481649 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:58:57.481656 | orchestrator | 2026-03-07 00:58:57.481690 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2026-03-07 00:58:57.481698 | orchestrator | Saturday 07 March 2026 00:50:35 +0000 (0:00:00.709) 
0:04:20.818 ******** 2026-03-07 00:58:57.481704 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:58:57.481711 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:58:57.481718 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:58:57.481724 | orchestrator | 2026-03-07 00:58:57.481731 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2026-03-07 00:58:57.481737 | orchestrator | Saturday 07 March 2026 00:50:36 +0000 (0:00:00.509) 0:04:21.328 ******** 2026-03-07 00:58:57.481744 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:58:57.481750 | orchestrator | 2026-03-07 00:58:57.481757 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2026-03-07 00:58:57.481763 | orchestrator | Saturday 07 March 2026 00:50:36 +0000 (0:00:00.273) 0:04:21.601 ******** 2026-03-07 00:58:57.481770 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:58:57.481776 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:58:57.481783 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:58:57.481790 | orchestrator | 2026-03-07 00:58:57.481796 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2026-03-07 00:58:57.481803 | orchestrator | Saturday 07 March 2026 00:50:36 +0000 (0:00:00.407) 0:04:22.009 ******** 2026-03-07 00:58:57.481809 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:58:57.481816 | orchestrator | 2026-03-07 00:58:57.481823 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2026-03-07 00:58:57.481829 | orchestrator | Saturday 07 March 2026 00:50:37 +0000 (0:00:00.306) 0:04:22.316 ******** 2026-03-07 00:58:57.481836 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:58:57.481842 | orchestrator | 2026-03-07 00:58:57.481868 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2026-03-07 00:58:57.481875 
| orchestrator | Saturday 07 March 2026 00:50:37 +0000 (0:00:00.336) 0:04:22.652 ******** 2026-03-07 00:58:57.481882 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:58:57.481889 | orchestrator | 2026-03-07 00:58:57.481902 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2026-03-07 00:58:57.481908 | orchestrator | Saturday 07 March 2026 00:50:37 +0000 (0:00:00.140) 0:04:22.793 ******** 2026-03-07 00:58:57.481915 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:58:57.481922 | orchestrator | 2026-03-07 00:58:57.481929 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2026-03-07 00:58:57.481935 | orchestrator | Saturday 07 March 2026 00:50:38 +0000 (0:00:01.043) 0:04:23.837 ******** 2026-03-07 00:58:57.481942 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:58:57.481948 | orchestrator | 2026-03-07 00:58:57.481955 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2026-03-07 00:58:57.481962 | orchestrator | Saturday 07 March 2026 00:50:38 +0000 (0:00:00.285) 0:04:24.122 ******** 2026-03-07 00:58:57.481969 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-07 00:58:57.481975 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-07 00:58:57.481982 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-07 00:58:57.481988 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:58:57.481995 | orchestrator | 2026-03-07 00:58:57.482006 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2026-03-07 00:58:57.482038 | orchestrator | Saturday 07 March 2026 00:50:39 +0000 (0:00:00.606) 0:04:24.728 ******** 2026-03-07 00:58:57.482046 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:58:57.482053 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:58:57.482060 | orchestrator | 
skipping: [testbed-node-5] 2026-03-07 00:58:57.482068 | orchestrator | 2026-03-07 00:58:57.482075 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2026-03-07 00:58:57.482082 | orchestrator | Saturday 07 March 2026 00:50:39 +0000 (0:00:00.410) 0:04:25.139 ******** 2026-03-07 00:58:57.482088 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:58:57.482095 | orchestrator | 2026-03-07 00:58:57.482101 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2026-03-07 00:58:57.482108 | orchestrator | Saturday 07 March 2026 00:50:40 +0000 (0:00:00.239) 0:04:25.379 ******** 2026-03-07 00:58:57.482115 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:58:57.482121 | orchestrator | 2026-03-07 00:58:57.482128 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-03-07 00:58:57.482134 | orchestrator | Saturday 07 March 2026 00:50:40 +0000 (0:00:00.334) 0:04:25.713 ******** 2026-03-07 00:58:57.482141 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:58:57.482148 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:58:57.482154 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:58:57.482161 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-07 00:58:57.482167 | orchestrator | 2026-03-07 00:58:57.482174 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2026-03-07 00:58:57.482181 | orchestrator | Saturday 07 March 2026 00:50:41 +0000 (0:00:01.408) 0:04:27.121 ******** 2026-03-07 00:58:57.482187 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:58:57.482194 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:58:57.482201 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:58:57.482207 | orchestrator | 2026-03-07 00:58:57.482214 | orchestrator | RUNNING HANDLER [ceph-handler : 
Copy mds restart script] *********************** 2026-03-07 00:58:57.482221 | orchestrator | Saturday 07 March 2026 00:50:42 +0000 (0:00:00.420) 0:04:27.542 ******** 2026-03-07 00:58:57.482227 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:58:57.482234 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:58:57.482241 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:58:57.482247 | orchestrator | 2026-03-07 00:58:57.482254 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2026-03-07 00:58:57.482260 | orchestrator | Saturday 07 March 2026 00:50:43 +0000 (0:00:01.529) 0:04:29.071 ******** 2026-03-07 00:58:57.482267 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-07 00:58:57.482284 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-07 00:58:57.482291 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-07 00:58:57.482298 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:58:57.482304 | orchestrator | 2026-03-07 00:58:57.482336 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2026-03-07 00:58:57.482344 | orchestrator | Saturday 07 March 2026 00:50:44 +0000 (0:00:01.048) 0:04:30.120 ******** 2026-03-07 00:58:57.482351 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:58:57.482358 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:58:57.482364 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:58:57.482371 | orchestrator | 2026-03-07 00:58:57.482377 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-03-07 00:58:57.482384 | orchestrator | Saturday 07 March 2026 00:50:45 +0000 (0:00:00.819) 0:04:30.939 ******** 2026-03-07 00:58:57.482391 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:58:57.482398 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:58:57.482404 | orchestrator | skipping: 
[testbed-node-2] 2026-03-07 00:58:57.482411 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-07 00:58:57.482418 | orchestrator | 2026-03-07 00:58:57.482424 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-03-07 00:58:57.482431 | orchestrator | Saturday 07 March 2026 00:50:46 +0000 (0:00:00.943) 0:04:31.883 ******** 2026-03-07 00:58:57.482438 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:58:57.482444 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:58:57.482451 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:58:57.482457 | orchestrator | 2026-03-07 00:58:57.482464 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-03-07 00:58:57.482470 | orchestrator | Saturday 07 March 2026 00:50:47 +0000 (0:00:00.809) 0:04:32.692 ******** 2026-03-07 00:58:57.482477 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:58:57.482484 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:58:57.482490 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:58:57.482497 | orchestrator | 2026-03-07 00:58:57.482504 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2026-03-07 00:58:57.482510 | orchestrator | Saturday 07 March 2026 00:50:49 +0000 (0:00:01.687) 0:04:34.380 ******** 2026-03-07 00:58:57.482517 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-07 00:58:57.482524 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-07 00:58:57.482531 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-07 00:58:57.482537 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:58:57.482544 | orchestrator | 2026-03-07 00:58:57.482551 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-03-07 00:58:57.482557 | 
orchestrator | Saturday 07 March 2026 00:50:50 +0000 (0:00:01.512) 0:04:35.892 ******** 2026-03-07 00:58:57.482564 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:58:57.482570 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:58:57.482577 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:58:57.482583 | orchestrator | 2026-03-07 00:58:57.482590 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2026-03-07 00:58:57.482597 | orchestrator | Saturday 07 March 2026 00:50:51 +0000 (0:00:00.723) 0:04:36.616 ******** 2026-03-07 00:58:57.482603 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:58:57.482610 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:58:57.482621 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:58:57.482628 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:58:57.482635 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:58:57.482641 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:58:57.482648 | orchestrator | 2026-03-07 00:58:57.482654 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-03-07 00:58:57.482661 | orchestrator | Saturday 07 March 2026 00:50:53 +0000 (0:00:01.744) 0:04:38.360 ******** 2026-03-07 00:58:57.482677 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:58:57.482684 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:58:57.482690 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:58:57.482697 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-1, testbed-node-2, testbed-node-0 2026-03-07 00:58:57.482703 | orchestrator | 2026-03-07 00:58:57.482710 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2026-03-07 00:58:57.482717 | orchestrator | Saturday 07 March 2026 00:50:54 +0000 (0:00:01.378) 0:04:39.739 ******** 2026-03-07 00:58:57.482723 | orchestrator | ok: 
[testbed-node-0] 2026-03-07 00:58:57.482730 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:58:57.482737 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:58:57.482743 | orchestrator | 2026-03-07 00:58:57.482750 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2026-03-07 00:58:57.482756 | orchestrator | Saturday 07 March 2026 00:50:55 +0000 (0:00:00.715) 0:04:40.455 ******** 2026-03-07 00:58:57.482763 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:58:57.482769 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:58:57.482776 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:58:57.482782 | orchestrator | 2026-03-07 00:58:57.482789 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2026-03-07 00:58:57.482796 | orchestrator | Saturday 07 March 2026 00:50:56 +0000 (0:00:01.588) 0:04:42.043 ******** 2026-03-07 00:58:57.482802 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-07 00:58:57.482809 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-07 00:58:57.482816 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-07 00:58:57.482822 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:58:57.482829 | orchestrator | 2026-03-07 00:58:57.482835 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2026-03-07 00:58:57.482842 | orchestrator | Saturday 07 March 2026 00:50:57 +0000 (0:00:00.876) 0:04:42.919 ******** 2026-03-07 00:58:57.482864 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:58:57.482871 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:58:57.482877 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:58:57.482884 | orchestrator | 2026-03-07 00:58:57.482891 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2026-03-07 00:58:57.482898 | orchestrator | 2026-03-07 
00:58:57.482904 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-07 00:58:57.482911 | orchestrator | Saturday 07 March 2026 00:50:58 +0000 (0:00:00.983) 0:04:43.903 ******** 2026-03-07 00:58:57.482940 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 00:58:57.482947 | orchestrator | 2026-03-07 00:58:57.482954 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-07 00:58:57.482960 | orchestrator | Saturday 07 March 2026 00:50:59 +0000 (0:00:00.648) 0:04:44.552 ******** 2026-03-07 00:58:57.482967 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 00:58:57.482974 | orchestrator | 2026-03-07 00:58:57.482981 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-07 00:58:57.482987 | orchestrator | Saturday 07 March 2026 00:51:00 +0000 (0:00:00.686) 0:04:45.238 ******** 2026-03-07 00:58:57.482994 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:58:57.483000 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:58:57.483007 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:58:57.483013 | orchestrator | 2026-03-07 00:58:57.483020 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-07 00:58:57.483027 | orchestrator | Saturday 07 March 2026 00:51:01 +0000 (0:00:01.290) 0:04:46.529 ******** 2026-03-07 00:58:57.483033 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:58:57.483040 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:58:57.483046 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:58:57.483059 | orchestrator | 2026-03-07 00:58:57.483066 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 
2026-03-07 00:58:57.483072 | orchestrator | Saturday 07 March 2026 00:51:01 +0000 (0:00:00.335) 0:04:46.865 ******** 2026-03-07 00:58:57.483079 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:58:57.483085 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:58:57.483092 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:58:57.483098 | orchestrator | 2026-03-07 00:58:57.483105 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-07 00:58:57.483111 | orchestrator | Saturday 07 March 2026 00:51:02 +0000 (0:00:00.381) 0:04:47.247 ******** 2026-03-07 00:58:57.483118 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:58:57.483124 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:58:57.483131 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:58:57.483137 | orchestrator | 2026-03-07 00:58:57.483144 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-07 00:58:57.483151 | orchestrator | Saturday 07 March 2026 00:51:02 +0000 (0:00:00.367) 0:04:47.615 ******** 2026-03-07 00:58:57.483157 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:58:57.483164 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:58:57.483171 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:58:57.483177 | orchestrator | 2026-03-07 00:58:57.483184 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-07 00:58:57.483190 | orchestrator | Saturday 07 March 2026 00:51:03 +0000 (0:00:01.237) 0:04:48.852 ******** 2026-03-07 00:58:57.483197 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:58:57.483204 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:58:57.483210 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:58:57.483217 | orchestrator | 2026-03-07 00:58:57.483224 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-07 
00:58:57.483235 | orchestrator | Saturday 07 March 2026 00:51:04 +0000 (0:00:00.429) 0:04:49.282 ******** 2026-03-07 00:58:57.483242 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:58:57.483248 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:58:57.483255 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:58:57.483261 | orchestrator | 2026-03-07 00:58:57.483268 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-07 00:58:57.483274 | orchestrator | Saturday 07 March 2026 00:51:04 +0000 (0:00:00.455) 0:04:49.737 ******** 2026-03-07 00:58:57.483281 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:58:57.483287 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:58:57.483294 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:58:57.483300 | orchestrator | 2026-03-07 00:58:57.483307 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-07 00:58:57.483313 | orchestrator | Saturday 07 March 2026 00:51:05 +0000 (0:00:00.920) 0:04:50.657 ******** 2026-03-07 00:58:57.483320 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:58:57.483326 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:58:57.483333 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:58:57.483339 | orchestrator | 2026-03-07 00:58:57.483346 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-07 00:58:57.483353 | orchestrator | Saturday 07 March 2026 00:51:07 +0000 (0:00:01.962) 0:04:52.620 ******** 2026-03-07 00:58:57.483359 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:58:57.483366 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:58:57.483372 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:58:57.483379 | orchestrator | 2026-03-07 00:58:57.483385 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-07 00:58:57.483392 | orchestrator | 
Saturday 07 March 2026 00:51:08 +0000 (0:00:01.061) 0:04:53.682 ******** 2026-03-07 00:58:57.483398 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:58:57.483405 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:58:57.483411 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:58:57.483418 | orchestrator | 2026-03-07 00:58:57.483425 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-07 00:58:57.483436 | orchestrator | Saturday 07 March 2026 00:51:09 +0000 (0:00:01.101) 0:04:54.784 ******** 2026-03-07 00:58:57.483443 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:58:57.483449 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:58:57.483456 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:58:57.483462 | orchestrator | 2026-03-07 00:58:57.483469 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-07 00:58:57.483475 | orchestrator | Saturday 07 March 2026 00:51:10 +0000 (0:00:00.526) 0:04:55.310 ******** 2026-03-07 00:58:57.483482 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:58:57.483488 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:58:57.483495 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:58:57.483501 | orchestrator | 2026-03-07 00:58:57.483508 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-07 00:58:57.483515 | orchestrator | Saturday 07 March 2026 00:51:11 +0000 (0:00:01.316) 0:04:56.627 ******** 2026-03-07 00:58:57.483521 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:58:57.483549 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:58:57.483556 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:58:57.483563 | orchestrator | 2026-03-07 00:58:57.483569 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-07 00:58:57.483576 | orchestrator | Saturday 07 March 
2026 00:51:12 +0000 (0:00:01.146) 0:04:57.773 ******** 2026-03-07 00:58:57.483583 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:58:57.483589 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:58:57.483596 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:58:57.483602 | orchestrator | 2026-03-07 00:58:57.483609 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-07 00:58:57.483615 | orchestrator | Saturday 07 March 2026 00:51:13 +0000 (0:00:00.473) 0:04:58.247 ******** 2026-03-07 00:58:57.483622 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:58:57.483628 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:58:57.483635 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:58:57.483642 | orchestrator | 2026-03-07 00:58:57.483648 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-07 00:58:57.483655 | orchestrator | Saturday 07 March 2026 00:51:13 +0000 (0:00:00.566) 0:04:58.813 ******** 2026-03-07 00:58:57.483661 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:58:57.483668 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:58:57.483674 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:58:57.483681 | orchestrator | 2026-03-07 00:58:57.483688 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-07 00:58:57.483694 | orchestrator | Saturday 07 March 2026 00:51:14 +0000 (0:00:00.947) 0:04:59.760 ******** 2026-03-07 00:58:57.483701 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:58:57.483707 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:58:57.483714 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:58:57.483721 | orchestrator | 2026-03-07 00:58:57.483727 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-07 00:58:57.483734 | orchestrator | Saturday 07 March 2026 00:51:15 +0000 (0:00:01.389) 
0:05:01.150 ******** 2026-03-07 00:58:57.483741 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:58:57.483747 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:58:57.483754 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:58:57.483760 | orchestrator | 2026-03-07 00:58:57.483767 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2026-03-07 00:58:57.483773 | orchestrator | Saturday 07 March 2026 00:51:17 +0000 (0:00:01.152) 0:05:02.303 ******** 2026-03-07 00:58:57.483780 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:58:57.483787 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:58:57.483793 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:58:57.483799 | orchestrator | 2026-03-07 00:58:57.483806 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2026-03-07 00:58:57.483813 | orchestrator | Saturday 07 March 2026 00:51:17 +0000 (0:00:00.532) 0:05:02.835 ******** 2026-03-07 00:58:57.483826 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 00:58:57.483833 | orchestrator | 2026-03-07 00:58:57.483839 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2026-03-07 00:58:57.483865 | orchestrator | Saturday 07 March 2026 00:51:19 +0000 (0:00:01.370) 0:05:04.206 ******** 2026-03-07 00:58:57.483873 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:58:57.483879 | orchestrator | 2026-03-07 00:58:57.483886 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2026-03-07 00:58:57.483892 | orchestrator | Saturday 07 March 2026 00:51:19 +0000 (0:00:00.229) 0:05:04.435 ******** 2026-03-07 00:58:57.483899 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-03-07 00:58:57.483905 | orchestrator | 2026-03-07 00:58:57.483912 | orchestrator | TASK [ceph-mon : Set_fact 
_initial_mon_key_success] **************************** 2026-03-07 00:58:57.483918 | orchestrator | Saturday 07 March 2026 00:51:20 +0000 (0:00:01.269) 0:05:05.704 ******** 2026-03-07 00:58:57.483925 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:58:57.483931 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:58:57.483938 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:58:57.483944 | orchestrator | 2026-03-07 00:58:57.483951 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2026-03-07 00:58:57.483957 | orchestrator | Saturday 07 March 2026 00:51:20 +0000 (0:00:00.444) 0:05:06.149 ******** 2026-03-07 00:58:57.483964 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:58:57.483970 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:58:57.483977 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:58:57.483984 | orchestrator | 2026-03-07 00:58:57.483990 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2026-03-07 00:58:57.483997 | orchestrator | Saturday 07 March 2026 00:51:21 +0000 (0:00:00.725) 0:05:06.874 ******** 2026-03-07 00:58:57.484003 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:58:57.484010 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:58:57.484016 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:58:57.484023 | orchestrator | 2026-03-07 00:58:57.484029 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2026-03-07 00:58:57.484036 | orchestrator | Saturday 07 March 2026 00:51:23 +0000 (0:00:01.432) 0:05:08.306 ******** 2026-03-07 00:58:57.484042 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:58:57.484049 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:58:57.484055 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:58:57.484062 | orchestrator | 2026-03-07 00:58:57.484068 | orchestrator | TASK [ceph-mon : Create monitor directory] 
************************************* 2026-03-07 00:58:57.484075 | orchestrator | Saturday 07 March 2026 00:51:24 +0000 (0:00:01.468) 0:05:09.775 ******** 2026-03-07 00:58:57.484081 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:58:57.484088 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:58:57.484094 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:58:57.484101 | orchestrator | 2026-03-07 00:58:57.484108 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2026-03-07 00:58:57.484114 | orchestrator | Saturday 07 March 2026 00:51:25 +0000 (0:00:01.311) 0:05:11.086 ******** 2026-03-07 00:58:57.484120 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:58:57.484127 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:58:57.484133 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:58:57.484140 | orchestrator | 2026-03-07 00:58:57.484146 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2026-03-07 00:58:57.484176 | orchestrator | Saturday 07 March 2026 00:51:27 +0000 (0:00:01.808) 0:05:12.895 ******** 2026-03-07 00:58:57.484184 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:58:57.484190 | orchestrator | 2026-03-07 00:58:57.484197 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2026-03-07 00:58:57.484204 | orchestrator | Saturday 07 March 2026 00:51:30 +0000 (0:00:02.303) 0:05:15.198 ******** 2026-03-07 00:58:57.484210 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:58:57.484222 | orchestrator | 2026-03-07 00:58:57.484229 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2026-03-07 00:58:57.484236 | orchestrator | Saturday 07 March 2026 00:51:30 +0000 (0:00:00.880) 0:05:16.079 ******** 2026-03-07 00:58:57.484243 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-07 00:58:57.484249 | orchestrator 
| ok: [testbed-node-0] => (item=None) 2026-03-07 00:58:57.484256 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-07 00:58:57.484263 | orchestrator | changed: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-07 00:58:57.484270 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-07 00:58:57.484276 | orchestrator | ok: [testbed-node-1] => (item=None) 2026-03-07 00:58:57.484283 | orchestrator | changed: [testbed-node-2] => (item=None) 2026-03-07 00:58:57.484290 | orchestrator | changed: [testbed-node-2 -> {{ item }}] 2026-03-07 00:58:57.484296 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-07 00:58:57.484303 | orchestrator | ok: [testbed-node-0 -> {{ item }}] 2026-03-07 00:58:57.484310 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-07 00:58:57.484317 | orchestrator | ok: [testbed-node-1 -> {{ item }}] 2026-03-07 00:58:57.484324 | orchestrator | 2026-03-07 00:58:57.484331 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2026-03-07 00:58:57.484337 | orchestrator | Saturday 07 March 2026 00:51:35 +0000 (0:00:04.133) 0:05:20.213 ******** 2026-03-07 00:58:57.484344 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:58:57.484351 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:58:57.484357 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:58:57.484364 | orchestrator | 2026-03-07 00:58:57.484370 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2026-03-07 00:58:57.484377 | orchestrator | Saturday 07 March 2026 00:51:37 +0000 (0:00:02.896) 0:05:23.110 ******** 2026-03-07 00:58:57.484384 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:58:57.484391 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:58:57.484398 | orchestrator | ok: [testbed-node-2] 
2026-03-07 00:58:57.484404 | orchestrator | 2026-03-07 00:58:57.484411 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2026-03-07 00:58:57.484418 | orchestrator | Saturday 07 March 2026 00:51:38 +0000 (0:00:00.488) 0:05:23.598 ******** 2026-03-07 00:58:57.484425 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:58:57.484431 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:58:57.484438 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:58:57.484444 | orchestrator | 2026-03-07 00:58:57.484451 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2026-03-07 00:58:57.484465 | orchestrator | Saturday 07 March 2026 00:51:39 +0000 (0:00:00.711) 0:05:24.309 ******** 2026-03-07 00:58:57.484472 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:58:57.484479 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:58:57.484486 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:58:57.484492 | orchestrator | 2026-03-07 00:58:57.484499 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2026-03-07 00:58:57.484505 | orchestrator | Saturday 07 March 2026 00:51:41 +0000 (0:00:02.026) 0:05:26.336 ******** 2026-03-07 00:58:57.484512 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:58:57.484519 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:58:57.484526 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:58:57.484533 | orchestrator | 2026-03-07 00:58:57.484539 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2026-03-07 00:58:57.484546 | orchestrator | Saturday 07 March 2026 00:51:42 +0000 (0:00:01.766) 0:05:28.103 ******** 2026-03-07 00:58:57.484553 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:58:57.484559 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:58:57.484566 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:58:57.484572 
| orchestrator | 2026-03-07 00:58:57.484584 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2026-03-07 00:58:57.484590 | orchestrator | Saturday 07 March 2026 00:51:43 +0000 (0:00:00.508) 0:05:28.611 ******** 2026-03-07 00:58:57.484597 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 00:58:57.484603 | orchestrator | 2026-03-07 00:58:57.484610 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2026-03-07 00:58:57.484617 | orchestrator | Saturday 07 March 2026 00:51:44 +0000 (0:00:00.995) 0:05:29.607 ******** 2026-03-07 00:58:57.484623 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:58:57.484630 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:58:57.484636 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:58:57.484643 | orchestrator | 2026-03-07 00:58:57.484650 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2026-03-07 00:58:57.484657 | orchestrator | Saturday 07 March 2026 00:51:44 +0000 (0:00:00.347) 0:05:29.954 ******** 2026-03-07 00:58:57.484664 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:58:57.484670 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:58:57.484677 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:58:57.484684 | orchestrator | 2026-03-07 00:58:57.484691 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2026-03-07 00:58:57.484698 | orchestrator | Saturday 07 March 2026 00:51:45 +0000 (0:00:00.353) 0:05:30.308 ******** 2026-03-07 00:58:57.484704 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 00:58:57.484711 | orchestrator | 2026-03-07 00:58:57.484718 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] 
***************** 2026-03-07 00:58:57.484749 | orchestrator | Saturday 07 March 2026 00:51:46 +0000 (0:00:00.995) 0:05:31.304 ******** 2026-03-07 00:58:57.484757 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:58:57.484763 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:58:57.484770 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:58:57.484777 | orchestrator | 2026-03-07 00:58:57.484783 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2026-03-07 00:58:57.484790 | orchestrator | Saturday 07 March 2026 00:51:48 +0000 (0:00:02.685) 0:05:33.989 ******** 2026-03-07 00:58:57.484796 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:58:57.484803 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:58:57.484809 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:58:57.484816 | orchestrator | 2026-03-07 00:58:57.484822 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2026-03-07 00:58:57.484829 | orchestrator | Saturday 07 March 2026 00:51:50 +0000 (0:00:01.369) 0:05:35.358 ******** 2026-03-07 00:58:57.484835 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:58:57.484842 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:58:57.484863 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:58:57.484870 | orchestrator | 2026-03-07 00:58:57.484877 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2026-03-07 00:58:57.484884 | orchestrator | Saturday 07 March 2026 00:51:52 +0000 (0:00:01.848) 0:05:37.207 ******** 2026-03-07 00:58:57.484890 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:58:57.484897 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:58:57.484903 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:58:57.484913 | orchestrator | 2026-03-07 00:58:57.484924 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] 
********************************** 2026-03-07 00:58:57.484935 | orchestrator | Saturday 07 March 2026 00:51:54 +0000 (0:00:02.457) 0:05:39.665 ******** 2026-03-07 00:58:57.484947 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 00:58:57.484957 | orchestrator | 2026-03-07 00:58:57.484968 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] ************* 2026-03-07 00:58:57.484979 | orchestrator | Saturday 07 March 2026 00:51:55 +0000 (0:00:00.609) 0:05:40.275 ******** 2026-03-07 00:58:57.485000 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left). 2026-03-07 00:58:57.485011 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:58:57.485023 | orchestrator | 2026-03-07 00:58:57.485031 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2026-03-07 00:58:57.485038 | orchestrator | Saturday 07 March 2026 00:52:16 +0000 (0:00:21.870) 0:06:02.146 ******** 2026-03-07 00:58:57.485044 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:58:57.485051 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:58:57.485057 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:58:57.485063 | orchestrator | 2026-03-07 00:58:57.485070 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2026-03-07 00:58:57.485076 | orchestrator | Saturday 07 March 2026 00:52:26 +0000 (0:00:09.815) 0:06:11.962 ******** 2026-03-07 00:58:57.485083 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:58:57.485089 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:58:57.485096 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:58:57.485102 | orchestrator | 2026-03-07 00:58:57.485114 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2026-03-07 00:58:57.485121 | orchestrator | 
Saturday 07 March 2026 00:52:27 +0000 (0:00:00.972) 0:06:12.935 ********
2026-03-07 00:58:57.485130 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__5956ae4cf79b3095b1d3c66455fdad3507093041'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}])
2026-03-07 00:58:57.485140 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__5956ae4cf79b3095b1d3c66455fdad3507093041'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}])
2026-03-07 00:58:57.485149 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__5956ae4cf79b3095b1d3c66455fdad3507093041'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}])
2026-03-07 00:58:57.485158 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__5956ae4cf79b3095b1d3c66455fdad3507093041'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}])
2026-03-07 00:58:57.485195 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__5956ae4cf79b3095b1d3c66455fdad3507093041'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}])
2026-03-07 00:58:57.485205 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__5956ae4cf79b3095b1d3c66455fdad3507093041'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__5956ae4cf79b3095b1d3c66455fdad3507093041'}])
2026-03-07 00:58:57.485214 | orchestrator |
2026-03-07 00:58:57.485220 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-03-07 00:58:57.485227 | orchestrator | Saturday 07 March 2026 00:52:42 +0000 (0:00:15.197) 0:06:28.132 ********
2026-03-07 00:58:57.485239 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:58:57.485246 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:58:57.485253 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:58:57.485259 | orchestrator |
2026-03-07 00:58:57.485266 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-03-07 00:58:57.485273 | orchestrator | Saturday 07 March 2026 00:52:43 +0000 (0:00:00.333) 0:06:28.465 ********
2026-03-07 00:58:57.485279 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-07 00:58:57.485286 | orchestrator |
2026-03-07 00:58:57.485293 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2026-03-07 00:58:57.485299 | orchestrator | Saturday 07 March 2026 00:52:44 +0000 (0:00:01.060) 0:06:29.525 ********
2026-03-07 00:58:57.485306 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:58:57.485313 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:58:57.485319 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:58:57.485326 | orchestrator |
2026-03-07 00:58:57.485332 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2026-03-07 00:58:57.485339 | orchestrator | Saturday 07 March 2026 00:52:44 +0000 (0:00:00.490) 0:06:30.016 ********
2026-03-07 00:58:57.485346 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:58:57.485352 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:58:57.485359 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:58:57.485365 | orchestrator |
2026-03-07 00:58:57.485372 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2026-03-07 00:58:57.485379 | orchestrator | Saturday 07 March 2026 00:52:45 +0000 (0:00:00.353) 0:06:30.369 ********
2026-03-07 00:58:57.485385 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-07 00:58:57.485392 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-07 00:58:57.485399 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-07 00:58:57.485405 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:58:57.485412 | orchestrator |
2026-03-07 00:58:57.485419 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2026-03-07 00:58:57.485430 | orchestrator | Saturday 07 March 2026 00:52:46 +0000 (0:00:01.271) 0:06:31.641 ********
2026-03-07 00:58:57.485436 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:58:57.485443 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:58:57.485450 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:58:57.485456 | orchestrator |
2026-03-07 00:58:57.485463 | orchestrator | PLAY [Apply role ceph-mgr] *****************************************************
2026-03-07 00:58:57.485470 | orchestrator |
2026-03-07 00:58:57.485477 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-07 00:58:57.485483 | orchestrator | Saturday 07 March 2026 00:52:47 +0000 (0:00:00.652) 0:06:32.293 ********
2026-03-07 00:58:57.485490 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-07 00:58:57.485497 | orchestrator |
2026-03-07 00:58:57.485503 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-07 00:58:57.485510 | orchestrator | Saturday 07 March 2026 00:52:47 +0000 (0:00:00.537) 0:06:32.831 ********
2026-03-07 00:58:57.485517 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-07 00:58:57.485523 | orchestrator |
2026-03-07 00:58:57.485530 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-07 00:58:57.485537 | orchestrator | Saturday 07 March 2026 00:52:48 +0000 (0:00:00.845) 0:06:33.676 ********
2026-03-07 00:58:57.485543 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:58:57.485550 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:58:57.485556 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:58:57.485563 | orchestrator |
2026-03-07 00:58:57.485570 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-07 00:58:57.485582 | orchestrator | Saturday 07 March 2026 00:52:49 +0000 (0:00:00.835) 0:06:34.512 ********
2026-03-07 00:58:57.485588 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:58:57.485595 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:58:57.485602 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:58:57.485608 | orchestrator |
2026-03-07 00:58:57.485615 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-07 00:58:57.485621 | orchestrator | Saturday 07 March 2026 00:52:49 +0000 (0:00:00.378) 0:06:34.890 ********
2026-03-07 00:58:57.485628 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:58:57.485635 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:58:57.485641 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:58:57.485648 | orchestrator |
2026-03-07 00:58:57.485654 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-07 00:58:57.485661 | orchestrator | Saturday 07 March 2026 00:52:50 +0000 (0:00:00.607) 0:06:35.498 ********
2026-03-07 00:58:57.485668 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:58:57.485674 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:58:57.485681 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:58:57.485688 | orchestrator |
2026-03-07 00:58:57.485694 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-07 00:58:57.485725 | orchestrator | Saturday 07 March 2026 00:52:50 +0000 (0:00:00.358) 0:06:35.857 ********
2026-03-07 00:58:57.485732 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:58:57.485739 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:58:57.485746 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:58:57.485752 | orchestrator |
2026-03-07 00:58:57.485759 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-07 00:58:57.485765 | orchestrator | Saturday 07 March 2026 00:52:51 +0000 (0:00:00.905) 0:06:36.762 ********
2026-03-07 00:58:57.485772 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:58:57.485779 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:58:57.485786 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:58:57.485792 | orchestrator |
2026-03-07 00:58:57.485799 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-07 00:58:57.485805 | orchestrator | Saturday 07 March 2026 00:52:51 +0000 (0:00:00.324) 0:06:37.087 ********
2026-03-07 00:58:57.485812 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:58:57.485819 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:58:57.485825 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:58:57.485832 | orchestrator |
2026-03-07 00:58:57.485839 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-07 00:58:57.485883 | orchestrator | Saturday 07 March 2026 00:52:52 +0000 (0:00:00.669) 0:06:37.756 ********
2026-03-07 00:58:57.485891 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:58:57.485898 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:58:57.485905 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:58:57.485911 | orchestrator |
2026-03-07 00:58:57.485918 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-07 00:58:57.485924 | orchestrator | Saturday 07 March 2026 00:52:53 +0000 (0:00:00.821) 0:06:38.578 ********
2026-03-07 00:58:57.485931 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:58:57.485937 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:58:57.485944 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:58:57.485951 | orchestrator |
2026-03-07 00:58:57.485957 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-07 00:58:57.485964 | orchestrator | Saturday 07 March 2026 00:52:54 +0000 (0:00:00.824) 0:06:39.402 ********
2026-03-07 00:58:57.485970 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:58:57.485977 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:58:57.485984 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:58:57.485990 | orchestrator |
2026-03-07 00:58:57.485997 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-07 00:58:57.486003 | orchestrator | Saturday 07 March 2026 00:52:54 +0000 (0:00:00.376) 0:06:39.778 ********
2026-03-07 00:58:57.486043 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:58:57.486050 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:58:57.486057 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:58:57.486064 | orchestrator |
2026-03-07 00:58:57.486072 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-07 00:58:57.486078 | orchestrator | Saturday 07 March 2026 00:52:55 +0000 (0:00:00.665) 0:06:40.443 ********
2026-03-07 00:58:57.486085 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:58:57.486092 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:58:57.486098 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:58:57.486105 | orchestrator |
2026-03-07 00:58:57.486116 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-07 00:58:57.486123 | orchestrator | Saturday 07 March 2026 00:52:55 +0000 (0:00:00.371) 0:06:40.814 ********
2026-03-07 00:58:57.486129 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:58:57.486136 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:58:57.486143 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:58:57.486149 | orchestrator |
2026-03-07 00:58:57.486155 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-07 00:58:57.486162 | orchestrator | Saturday 07 March 2026 00:52:56 +0000 (0:00:00.391) 0:06:41.206 ********
2026-03-07 00:58:57.486169 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:58:57.486175 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:58:57.486182 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:58:57.486188 | orchestrator |
2026-03-07 00:58:57.486195 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-07 00:58:57.486201 | orchestrator | Saturday 07 March 2026 00:52:56 +0000 (0:00:00.330) 0:06:41.536 ********
2026-03-07 00:58:57.486208 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:58:57.486214 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:58:57.486220 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:58:57.486226 | orchestrator |
2026-03-07 00:58:57.486233 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-07 00:58:57.486239 | orchestrator | Saturday 07 March 2026 00:52:56 +0000 (0:00:00.389) 0:06:41.926 ********
2026-03-07 00:58:57.486245 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:58:57.486251 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:58:57.486257 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:58:57.486263 | orchestrator |
2026-03-07 00:58:57.486269 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-07 00:58:57.486275 | orchestrator | Saturday 07 March 2026 00:52:57 +0000 (0:00:00.714) 0:06:42.640 ********
2026-03-07 00:58:57.486281 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:58:57.486288 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:58:57.486294 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:58:57.486300 | orchestrator |
2026-03-07 00:58:57.486306 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-07 00:58:57.486312 | orchestrator | Saturday 07 March 2026 00:52:57 +0000 (0:00:00.491) 0:06:43.132 ********
2026-03-07 00:58:57.486319 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:58:57.486325 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:58:57.486331 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:58:57.486337 | orchestrator |
2026-03-07 00:58:57.486343 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-07 00:58:57.486349 | orchestrator | Saturday 07 March 2026 00:52:58 +0000 (0:00:00.358) 0:06:43.490 ********
2026-03-07 00:58:57.486355 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:58:57.486361 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:58:57.486367 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:58:57.486374 | orchestrator |
2026-03-07 00:58:57.486380 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2026-03-07 00:58:57.486386 | orchestrator | Saturday 07 March 2026 00:52:59 +0000 (0:00:00.896) 0:06:44.387 ********
2026-03-07 00:58:57.486420 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-03-07 00:58:57.486432 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-07 00:58:57.486439 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-07 00:58:57.486445 | orchestrator |
2026-03-07 00:58:57.486451 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2026-03-07 00:58:57.486457 | orchestrator | Saturday 07 March 2026 00:52:59 +0000 (0:00:00.765) 0:06:45.152 ********
2026-03-07 00:58:57.486463 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-07 00:58:57.486469 | orchestrator |
2026-03-07 00:58:57.486475 | orchestrator | TASK [ceph-mgr : Create mgr directory] *****************************************
2026-03-07 00:58:57.486481 | orchestrator | Saturday 07 March 2026 00:53:00 +0000 (0:00:00.576) 0:06:45.728 ********
2026-03-07 00:58:57.486488 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:58:57.486494 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:58:57.486500 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:58:57.486506 | orchestrator |
2026-03-07 00:58:57.486512 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] ***************************************
2026-03-07 00:58:57.486518 | orchestrator | Saturday 07 March 2026 00:53:01 +0000 (0:00:00.708) 0:06:46.437 ********
2026-03-07 00:58:57.486524 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:58:57.486531 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:58:57.486537 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:58:57.486543 | orchestrator |
2026-03-07 00:58:57.486549 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] *********************
2026-03-07 00:58:57.486555 | orchestrator | Saturday 07 March 2026 00:53:01 +0000 (0:00:00.590) 0:06:47.028 ********
2026-03-07 00:58:57.486561 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-07 00:58:57.486567 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-07 00:58:57.486574 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-07 00:58:57.486580 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}]
2026-03-07 00:58:57.486586 | orchestrator |
2026-03-07 00:58:57.486592 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] *******************************************
2026-03-07 00:58:57.486598 | orchestrator | Saturday 07 March 2026 00:53:13 +0000 (0:00:11.412) 0:06:58.440 ********
2026-03-07 00:58:57.486604 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:58:57.486610 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:58:57.486616 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:58:57.486622 | orchestrator |
2026-03-07 00:58:57.486629 | orchestrator | TASK [ceph-mgr : Get keys from monitors] ***************************************
2026-03-07 00:58:57.486635 | orchestrator | Saturday 07 March 2026 00:53:13 +0000 (0:00:00.425) 0:06:58.865 ********
2026-03-07 00:58:57.486641 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-03-07 00:58:57.486647 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-03-07 00:58:57.486653 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-03-07 00:58:57.486659 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-03-07 00:58:57.486670 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-07 00:58:57.486676 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-07 00:58:57.486682 | orchestrator |
2026-03-07 00:58:57.486689 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] ***********************************
2026-03-07 00:58:57.486695 | orchestrator | Saturday 07 March 2026 00:53:15 +0000 (0:00:02.204) 0:07:01.070 ********
2026-03-07 00:58:57.486701 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-03-07 00:58:57.486707 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-03-07 00:58:57.486713 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-03-07 00:58:57.486719 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-07 00:58:57.486725 | orchestrator | changed: [testbed-node-1] => (item=None)
2026-03-07 00:58:57.486731 | orchestrator | changed: [testbed-node-2] => (item=None)
2026-03-07 00:58:57.486737 | orchestrator |
2026-03-07 00:58:57.486748 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] **************************************
2026-03-07 00:58:57.486754 | orchestrator | Saturday 07 March 2026 00:53:17 +0000 (0:00:01.306) 0:07:02.376 ********
2026-03-07 00:58:57.486760 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:58:57.486767 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:58:57.486773 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:58:57.486779 | orchestrator |
2026-03-07 00:58:57.486785 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] *****************
2026-03-07 00:58:57.486791 | orchestrator | Saturday 07 March 2026 00:53:18 +0000 (0:00:01.090) 0:07:03.467 ********
2026-03-07 00:58:57.486797 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:58:57.486803 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:58:57.486810 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:58:57.486816 | orchestrator |
2026-03-07 00:58:57.486822 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2026-03-07 00:58:57.486828 | orchestrator | Saturday 07 March 2026 00:53:18 +0000 (0:00:00.355) 0:07:03.822 ********
2026-03-07 00:58:57.486834 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:58:57.486840 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:58:57.486862 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:58:57.486869 | orchestrator |
2026-03-07 00:58:57.486875 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2026-03-07 00:58:57.486881 | orchestrator | Saturday 07 March 2026 00:53:18 +0000 (0:00:00.335) 0:07:04.157 ********
2026-03-07 00:58:57.486888 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-07 00:58:57.486894 | orchestrator |
2026-03-07 00:58:57.486900 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] *************
2026-03-07 00:58:57.486906 | orchestrator | Saturday 07 March 2026 00:53:19 +0000 (0:00:00.795) 0:07:04.952 ********
2026-03-07 00:58:57.486913 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:58:57.486919 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:58:57.486925 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:58:57.486931 | orchestrator |
2026-03-07 00:58:57.486961 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] ***********************
2026-03-07 00:58:57.486968 | orchestrator | Saturday 07 March 2026 00:53:20 +0000 (0:00:00.344) 0:07:05.297 ********
2026-03-07 00:58:57.486974 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:58:57.486980 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:58:57.486986 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:58:57.486992 | orchestrator |
2026-03-07 00:58:57.486999 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************
2026-03-07 00:58:57.487005 | orchestrator | Saturday 07 March 2026 00:53:20 +0000 (0:00:00.333) 0:07:05.631 ********
2026-03-07 00:58:57.487011 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-1, testbed-node-0, testbed-node-2
2026-03-07 00:58:57.487017 | orchestrator |
2026-03-07 00:58:57.487023 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] ***********************************
2026-03-07 00:58:57.487030 | orchestrator | Saturday 07 March 2026 00:53:21 +0000 (0:00:00.844) 0:07:06.476 ********
2026-03-07 00:58:57.487036 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:58:57.487042 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:58:57.487048 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:58:57.487054 | orchestrator |
2026-03-07 00:58:57.487061 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************
2026-03-07 00:58:57.487067 | orchestrator | Saturday 07 March 2026 00:53:22 +0000 (0:00:01.345) 0:07:07.821 ********
2026-03-07 00:58:57.487073 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:58:57.487079 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:58:57.487085 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:58:57.487092 | orchestrator |
2026-03-07 00:58:57.487098 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] ***************************************
2026-03-07 00:58:57.487104 | orchestrator | Saturday 07 March 2026 00:53:23 +0000 (0:00:01.257) 0:07:09.078 ********
2026-03-07 00:58:57.487117 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:58:57.487123 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:58:57.487129 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:58:57.487135 | orchestrator |
2026-03-07 00:58:57.487141 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ********************************************
2026-03-07 00:58:57.487147 | orchestrator | Saturday 07 March 2026 00:53:25 +0000 (0:00:01.999) 0:07:11.078 ********
2026-03-07 00:58:57.487153 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:58:57.487159 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:58:57.487165 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:58:57.487172 | orchestrator |
2026-03-07 00:58:57.487178 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2026-03-07 00:58:57.487184 | orchestrator | Saturday 07 March 2026 00:53:28 +0000 (0:00:02.335) 0:07:13.413 ********
2026-03-07 00:58:57.487190 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:58:57.487196 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:58:57.487203 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2
2026-03-07 00:58:57.487209 | orchestrator |
2026-03-07 00:58:57.487215 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************
2026-03-07 00:58:57.487225 | orchestrator | Saturday 07 March 2026 00:53:28 +0000 (0:00:00.464) 0:07:13.878 ********
2026-03-07 00:58:57.487231 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left).
2026-03-07 00:58:57.487238 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left).
2026-03-07 00:58:57.487244 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left).
2026-03-07 00:58:57.487250 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left).
2026-03-07 00:58:57.487256 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (26 retries left).
2026-03-07 00:58:57.487262 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-03-07 00:58:57.487268 | orchestrator |
2026-03-07 00:58:57.487275 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] ****************************
2026-03-07 00:58:57.487281 | orchestrator | Saturday 07 March 2026 00:53:58 +0000 (0:00:30.073) 0:07:43.952 ********
2026-03-07 00:58:57.487287 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-03-07 00:58:57.487293 | orchestrator |
2026-03-07 00:58:57.487299 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] ***
2026-03-07 00:58:57.487305 | orchestrator | Saturday 07 March 2026 00:54:00 +0000 (0:00:01.323) 0:07:45.275 ********
2026-03-07 00:58:57.487312 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:58:57.487318 | orchestrator |
2026-03-07 00:58:57.487324 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] **************************
2026-03-07 00:58:57.487330 | orchestrator | Saturday 07 March 2026 00:54:00 +0000 (0:00:00.393) 0:07:45.668 ********
2026-03-07 00:58:57.487336 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:58:57.487342 | orchestrator |
2026-03-07 00:58:57.487348 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] *****************************
2026-03-07 00:58:57.487355 | orchestrator | Saturday 07 March 2026 00:54:00 +0000 (0:00:00.120) 0:07:45.789 ********
2026-03-07 00:58:57.487361 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat)
2026-03-07 00:58:57.487367 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs)
2026-03-07 00:58:57.487373 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful)
2026-03-07 00:58:57.487379 | orchestrator |
2026-03-07 00:58:57.487385 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] **************************************
2026-03-07 00:58:57.487391 | orchestrator | Saturday 07 March 2026 00:54:07 +0000 (0:00:06.568) 0:07:52.358 ********
2026-03-07 00:58:57.487398 | orchestrator | skipping: [testbed-node-2] => (item=balancer)
2026-03-07 00:58:57.487433 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard)
2026-03-07 00:58:57.487440 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus)
2026-03-07 00:58:57.487447 | orchestrator | skipping: [testbed-node-2] => (item=status)
2026-03-07 00:58:57.487453 | orchestrator |
2026-03-07 00:58:57.487459 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-03-07 00:58:57.487465 | orchestrator | Saturday 07 March 2026 00:54:12 +0000 (0:00:05.105) 0:07:57.463 ********
2026-03-07 00:58:57.487471 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:58:57.487478 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:58:57.487484 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:58:57.487490 | orchestrator |
2026-03-07 00:58:57.487496 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2026-03-07 00:58:57.487502 | orchestrator | Saturday 07 March 2026 00:54:12 +0000 (0:00:00.680) 0:07:58.143 ********
2026-03-07 00:58:57.487509 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-07 00:58:57.487515 | orchestrator |
2026-03-07 00:58:57.487521 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2026-03-07 00:58:57.487527 | orchestrator | Saturday 07 March 2026 00:54:13 +0000 (0:00:00.896) 0:07:59.040 ********
2026-03-07 00:58:57.487534 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:58:57.487540 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:58:57.487546 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:58:57.487552 | orchestrator |
2026-03-07 00:58:57.487558 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2026-03-07 00:58:57.487564 | orchestrator | Saturday 07 March 2026 00:54:14 +0000 (0:00:00.413) 0:07:59.453 ********
2026-03-07 00:58:57.487571 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:58:57.487577 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:58:57.487583 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:58:57.487589 | orchestrator |
2026-03-07 00:58:57.487595 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2026-03-07 00:58:57.487601 | orchestrator | Saturday 07 March 2026 00:54:15 +0000 (0:00:01.317) 0:08:00.771 ********
2026-03-07 00:58:57.487608 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-07 00:58:57.487614 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-07 00:58:57.487620 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-07 00:58:57.487626 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:58:57.487632 | orchestrator |
2026-03-07 00:58:57.487638 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2026-03-07 00:58:57.487644 | orchestrator | Saturday 07 March 2026 00:54:16 +0000 (0:00:01.044) 0:08:01.815 ********
2026-03-07 00:58:57.487650 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:58:57.487657 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:58:57.487663 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:58:57.487669 | orchestrator |
2026-03-07 00:58:57.487675 | orchestrator | PLAY [Apply role ceph-osd] *****************************************************
2026-03-07 00:58:57.487682 | orchestrator |
2026-03-07 00:58:57.487688 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-07 00:58:57.487701 | orchestrator | Saturday 07 March 2026 00:54:17 +0000 (0:00:00.743) 0:08:02.559 ********
2026-03-07 00:58:57.487707 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-07 00:58:57.487714 | orchestrator |
2026-03-07 00:58:57.487720 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-07 00:58:57.487726 | orchestrator | Saturday 07 March 2026 00:54:17 +0000 (0:00:00.522) 0:08:03.081 ********
2026-03-07 00:58:57.487733 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-07 00:58:57.487744 | orchestrator |
2026-03-07 00:58:57.487750 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-07 00:58:57.487756 | orchestrator | Saturday 07 March 2026 00:54:18 +0000 (0:00:00.761) 0:08:03.842 ********
2026-03-07 00:58:57.487762 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:58:57.487769 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:58:57.487775 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:58:57.487781 | orchestrator |
2026-03-07 00:58:57.487787 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-07 00:58:57.487793 | orchestrator | Saturday 07 March 2026 00:54:18 +0000 (0:00:00.292) 0:08:04.135 ********
2026-03-07 00:58:57.487799 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:58:57.487806 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:58:57.487812 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:58:57.487818 | orchestrator |
2026-03-07 00:58:57.487824 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-07 00:58:57.487830 | orchestrator | Saturday 07 March 2026 00:54:19 +0000 (0:00:00.677) 0:08:04.813 ********
2026-03-07 00:58:57.487836 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:58:57.487842 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:58:57.487863 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:58:57.487869 | orchestrator |
2026-03-07 00:58:57.487875 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-07 00:58:57.487881 | orchestrator | Saturday 07 March 2026 00:54:20 +0000 (0:00:00.757) 0:08:05.571 ********
2026-03-07 00:58:57.487887 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:58:57.487894 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:58:57.487900 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:58:57.487906 | orchestrator |
2026-03-07 00:58:57.487912 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-07 00:58:57.487918 | orchestrator | Saturday 07 March 2026 00:54:21 +0000 (0:00:01.052) 0:08:06.623 ********
2026-03-07 00:58:57.487925 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:58:57.487931 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:58:57.487937 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:58:57.487943 | orchestrator |
2026-03-07 00:58:57.487949 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-07 00:58:57.487956 | orchestrator | Saturday 07 March 2026 00:54:21 +0000 (0:00:00.337) 0:08:06.961 ********
2026-03-07 00:58:57.487993 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:58:57.488008 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:58:57.488025 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:58:57.488034 | orchestrator |
2026-03-07 00:58:57.488044 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-07 00:58:57.488054 | orchestrator | Saturday 07 March 2026 00:54:22 +0000 (0:00:00.371) 0:08:07.332 ********
2026-03-07 00:58:57.488063 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:58:57.488073 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:58:57.488082 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:58:57.488092 | orchestrator |
2026-03-07 00:58:57.488102 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-07 00:58:57.488112 | orchestrator | Saturday 07 March 2026 00:54:22 +0000 (0:00:00.324) 0:08:07.656 ********
2026-03-07 00:58:57.488121 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:58:57.488132 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:58:57.488141 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:58:57.488151 | orchestrator |
2026-03-07 00:58:57.488161 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-07 00:58:57.488172 | orchestrator | Saturday 07 March 2026 00:54:23 +0000 (0:00:01.086) 0:08:08.743 ********
2026-03-07 00:58:57.488182 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:58:57.488193 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:58:57.488199 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:58:57.488205 | orchestrator |
2026-03-07 00:58:57.488212 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-07 00:58:57.488226 | orchestrator | Saturday 07 March 2026 00:54:24 +0000 (0:00:00.794) 0:08:09.538 ********
2026-03-07 00:58:57.488232 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:58:57.488238 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:58:57.488244 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:58:57.488250 | orchestrator |
2026-03-07 00:58:57.488257 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-07 00:58:57.488263 | orchestrator | Saturday 07 March 2026 00:54:24 +0000 (0:00:00.353) 0:08:09.891 ********
2026-03-07 00:58:57.488269 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:58:57.488275 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:58:57.488281 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:58:57.488287 | orchestrator |
2026-03-07 00:58:57.488293 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-07 00:58:57.488299 | orchestrator | Saturday 07 March 2026 00:54:25 +0000 (0:00:00.366) 0:08:10.258 ********
2026-03-07 00:58:57.488305 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:58:57.488311 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:58:57.488317 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:58:57.488323 | orchestrator |
2026-03-07 00:58:57.488330 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-07 00:58:57.488336 | orchestrator | Saturday 07 March 2026 00:54:25 +0000 (0:00:00.683) 0:08:10.941 ********
2026-03-07 00:58:57.488342 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:58:57.488348 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:58:57.488354 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:58:57.488360 | orchestrator |
2026-03-07 00:58:57.488366 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-07 00:58:57.488378 | orchestrator | Saturday 07 March 2026 00:54:26 +0000 (0:00:00.349) 0:08:11.290 ********
2026-03-07 00:58:57.488384 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:58:57.488390 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:58:57.488396 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:58:57.488402 | orchestrator |
2026-03-07 00:58:57.488408 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-07 00:58:57.488414 | orchestrator | Saturday 07 March 2026 00:54:26 +0000 (0:00:00.314) 0:08:11.605 ********
2026-03-07 00:58:57.488421 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:58:57.488427 |
orchestrator | skipping: [testbed-node-4] 2026-03-07 00:58:57.488433 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:58:57.488439 | orchestrator | 2026-03-07 00:58:57.488445 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-07 00:58:57.488451 | orchestrator | Saturday 07 March 2026 00:54:26 +0000 (0:00:00.268) 0:08:11.874 ******** 2026-03-07 00:58:57.488457 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:58:57.488463 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:58:57.488470 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:58:57.488476 | orchestrator | 2026-03-07 00:58:57.488482 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-07 00:58:57.488488 | orchestrator | Saturday 07 March 2026 00:54:27 +0000 (0:00:00.529) 0:08:12.403 ******** 2026-03-07 00:58:57.488494 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:58:57.488500 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:58:57.488506 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:58:57.488512 | orchestrator | 2026-03-07 00:58:57.488519 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-07 00:58:57.488525 | orchestrator | Saturday 07 March 2026 00:54:27 +0000 (0:00:00.308) 0:08:12.711 ******** 2026-03-07 00:58:57.488531 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:58:57.488537 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:58:57.488543 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:58:57.488549 | orchestrator | 2026-03-07 00:58:57.488555 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-07 00:58:57.488561 | orchestrator | Saturday 07 March 2026 00:54:27 +0000 (0:00:00.348) 0:08:13.060 ******** 2026-03-07 00:58:57.488572 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:58:57.488579 | orchestrator | ok: 
[testbed-node-4] 2026-03-07 00:58:57.488585 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:58:57.488591 | orchestrator | 2026-03-07 00:58:57.488597 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2026-03-07 00:58:57.488603 | orchestrator | Saturday 07 March 2026 00:54:28 +0000 (0:00:00.711) 0:08:13.772 ******** 2026-03-07 00:58:57.488609 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:58:57.488615 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:58:57.488621 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:58:57.488627 | orchestrator | 2026-03-07 00:58:57.488634 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2026-03-07 00:58:57.488640 | orchestrator | Saturday 07 March 2026 00:54:28 +0000 (0:00:00.309) 0:08:14.082 ******** 2026-03-07 00:58:57.488646 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-07 00:58:57.488658 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-07 00:58:57.488664 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-07 00:58:57.488670 | orchestrator | 2026-03-07 00:58:57.488676 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2026-03-07 00:58:57.488683 | orchestrator | Saturday 07 March 2026 00:54:29 +0000 (0:00:00.609) 0:08:14.692 ******** 2026-03-07 00:58:57.488689 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-07 00:58:57.488695 | orchestrator | 2026-03-07 00:58:57.488701 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2026-03-07 00:58:57.488707 | orchestrator | Saturday 07 March 2026 00:54:30 +0000 (0:00:00.508) 0:08:15.200 ******** 2026-03-07 00:58:57.488714 | orchestrator | skipping: 
[testbed-node-3] 2026-03-07 00:58:57.488720 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:58:57.488726 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:58:57.488732 | orchestrator | 2026-03-07 00:58:57.488738 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2026-03-07 00:58:57.488744 | orchestrator | Saturday 07 March 2026 00:54:30 +0000 (0:00:00.524) 0:08:15.727 ******** 2026-03-07 00:58:57.488750 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:58:57.488756 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:58:57.488763 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:58:57.488769 | orchestrator | 2026-03-07 00:58:57.488775 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2026-03-07 00:58:57.488781 | orchestrator | Saturday 07 March 2026 00:54:30 +0000 (0:00:00.282) 0:08:16.010 ******** 2026-03-07 00:58:57.488787 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:58:57.488793 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:58:57.488799 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:58:57.488805 | orchestrator | 2026-03-07 00:58:57.488812 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2026-03-07 00:58:57.488818 | orchestrator | Saturday 07 March 2026 00:54:31 +0000 (0:00:00.657) 0:08:16.668 ******** 2026-03-07 00:58:57.488824 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:58:57.488830 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:58:57.488836 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:58:57.488842 | orchestrator | 2026-03-07 00:58:57.488862 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2026-03-07 00:58:57.488868 | orchestrator | Saturday 07 March 2026 00:54:31 +0000 (0:00:00.330) 0:08:16.998 ******** 2026-03-07 00:58:57.488874 | orchestrator | changed: [testbed-node-4] => (item={'name': 
'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-03-07 00:58:57.488881 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-03-07 00:58:57.488887 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-03-07 00:58:57.488904 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-03-07 00:58:57.488910 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-03-07 00:58:57.488916 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-03-07 00:58:57.488923 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-03-07 00:58:57.488929 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-03-07 00:58:57.488935 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-03-07 00:58:57.488941 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-03-07 00:58:57.488947 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-03-07 00:58:57.488953 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-03-07 00:58:57.488960 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-03-07 00:58:57.488966 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-03-07 00:58:57.488972 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-03-07 00:58:57.488978 | orchestrator | 2026-03-07 00:58:57.488985 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 
2026-03-07 00:58:57.488991 | orchestrator | Saturday 07 March 2026 00:54:35 +0000 (0:00:03.173) 0:08:20.172 ******** 2026-03-07 00:58:57.488998 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:58:57.489004 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:58:57.489010 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:58:57.489016 | orchestrator | 2026-03-07 00:58:57.489023 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2026-03-07 00:58:57.489029 | orchestrator | Saturday 07 March 2026 00:54:35 +0000 (0:00:00.358) 0:08:20.530 ******** 2026-03-07 00:58:57.489035 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-07 00:58:57.489042 | orchestrator | 2026-03-07 00:58:57.489048 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2026-03-07 00:58:57.489054 | orchestrator | Saturday 07 March 2026 00:54:35 +0000 (0:00:00.557) 0:08:21.087 ******** 2026-03-07 00:58:57.489060 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2026-03-07 00:58:57.489067 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2026-03-07 00:58:57.489073 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2026-03-07 00:58:57.489079 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2026-03-07 00:58:57.489094 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2026-03-07 00:58:57.489101 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2026-03-07 00:58:57.489107 | orchestrator | 2026-03-07 00:58:57.489113 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2026-03-07 00:58:57.489119 | orchestrator | Saturday 07 March 2026 00:54:37 +0000 (0:00:01.380) 0:08:22.468 ******** 2026-03-07 00:58:57.489125 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] => (item=None) 2026-03-07 00:58:57.489132 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-07 00:58:57.489138 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-07 00:58:57.489144 | orchestrator | 2026-03-07 00:58:57.489150 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2026-03-07 00:58:57.489157 | orchestrator | Saturday 07 March 2026 00:54:39 +0000 (0:00:02.230) 0:08:24.698 ******** 2026-03-07 00:58:57.489163 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-07 00:58:57.489169 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-07 00:58:57.489175 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:58:57.489186 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-07 00:58:57.489192 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-03-07 00:58:57.489199 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:58:57.489205 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-07 00:58:57.489211 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-03-07 00:58:57.489217 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:58:57.489224 | orchestrator | 2026-03-07 00:58:57.489230 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2026-03-07 00:58:57.489236 | orchestrator | Saturday 07 March 2026 00:54:40 +0000 (0:00:01.281) 0:08:25.979 ******** 2026-03-07 00:58:57.489242 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-07 00:58:57.489249 | orchestrator | 2026-03-07 00:58:57.489255 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2026-03-07 00:58:57.489261 | orchestrator | Saturday 07 March 2026 00:54:42 +0000 (0:00:02.029) 0:08:28.009 ******** 2026-03-07 00:58:57.489267 | orchestrator | included: 
/ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-07 00:58:57.489274 | orchestrator | 2026-03-07 00:58:57.489280 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] ******************************* 2026-03-07 00:58:57.489286 | orchestrator | Saturday 07 March 2026 00:54:43 +0000 (0:00:00.949) 0:08:28.958 ******** 2026-03-07 00:58:57.489292 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-e9f941f3-03bb-56ef-8ac7-c30bc8004c51', 'data_vg': 'ceph-e9f941f3-03bb-56ef-8ac7-c30bc8004c51'}) 2026-03-07 00:58:57.489300 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-f3e458ba-b75f-5cb4-a1c9-e61fe3486295', 'data_vg': 'ceph-f3e458ba-b75f-5cb4-a1c9-e61fe3486295'}) 2026-03-07 00:58:57.489310 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-c6d853cd-f8df-5f7f-ab25-9ac4f40a4d2c', 'data_vg': 'ceph-c6d853cd-f8df-5f7f-ab25-9ac4f40a4d2c'}) 2026-03-07 00:58:57.489317 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-6cee2ec4-9e84-549b-8075-e81043ce518c', 'data_vg': 'ceph-6cee2ec4-9e84-549b-8075-e81043ce518c'}) 2026-03-07 00:58:57.489323 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-50ec861c-6b17-5421-b6cb-257ea2a8b129', 'data_vg': 'ceph-50ec861c-6b17-5421-b6cb-257ea2a8b129'}) 2026-03-07 00:58:57.489329 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-5cfbeba1-5550-585b-8a7e-42a4921f8eca', 'data_vg': 'ceph-5cfbeba1-5550-585b-8a7e-42a4921f8eca'}) 2026-03-07 00:58:57.489335 | orchestrator | 2026-03-07 00:58:57.489342 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2026-03-07 00:58:57.489348 | orchestrator | Saturday 07 March 2026 00:55:22 +0000 (0:00:39.015) 0:09:07.974 ******** 2026-03-07 00:58:57.489354 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:58:57.489360 | orchestrator | skipping: [testbed-node-4] 2026-03-07 
00:58:57.489366 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:58:57.489373 | orchestrator | 2026-03-07 00:58:57.489379 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2026-03-07 00:58:57.489385 | orchestrator | Saturday 07 March 2026 00:55:23 +0000 (0:00:00.352) 0:09:08.326 ******** 2026-03-07 00:58:57.489391 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-07 00:58:57.489398 | orchestrator | 2026-03-07 00:58:57.489404 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2026-03-07 00:58:57.489410 | orchestrator | Saturday 07 March 2026 00:55:23 +0000 (0:00:00.818) 0:09:09.145 ******** 2026-03-07 00:58:57.489417 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:58:57.489423 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:58:57.489429 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:58:57.489435 | orchestrator | 2026-03-07 00:58:57.489442 | orchestrator | TASK [ceph-osd : Collect osd ids] ********************************************** 2026-03-07 00:58:57.489448 | orchestrator | Saturday 07 March 2026 00:55:24 +0000 (0:00:00.683) 0:09:09.829 ******** 2026-03-07 00:58:57.489459 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:58:57.489465 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:58:57.489471 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:58:57.489477 | orchestrator | 2026-03-07 00:58:57.489484 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2026-03-07 00:58:57.489490 | orchestrator | Saturday 07 March 2026 00:55:27 +0000 (0:00:02.687) 0:09:12.517 ******** 2026-03-07 00:58:57.489496 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-07 00:58:57.489502 | orchestrator | 2026-03-07 00:58:57.489512 | orchestrator | TASK [ceph-osd : 
Generate systemd unit file] *********************************** 2026-03-07 00:58:57.489519 | orchestrator | Saturday 07 March 2026 00:55:28 +0000 (0:00:00.936) 0:09:13.453 ******** 2026-03-07 00:58:57.489525 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:58:57.489531 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:58:57.489537 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:58:57.489544 | orchestrator | 2026-03-07 00:58:57.489550 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2026-03-07 00:58:57.489556 | orchestrator | Saturday 07 March 2026 00:55:29 +0000 (0:00:01.228) 0:09:14.682 ******** 2026-03-07 00:58:57.489562 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:58:57.489569 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:58:57.489575 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:58:57.489581 | orchestrator | 2026-03-07 00:58:57.489587 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2026-03-07 00:58:57.489593 | orchestrator | Saturday 07 March 2026 00:55:30 +0000 (0:00:01.190) 0:09:15.872 ******** 2026-03-07 00:58:57.489600 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:58:57.489606 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:58:57.489612 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:58:57.489618 | orchestrator | 2026-03-07 00:58:57.489625 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2026-03-07 00:58:57.489631 | orchestrator | Saturday 07 March 2026 00:55:32 +0000 (0:00:01.745) 0:09:17.617 ******** 2026-03-07 00:58:57.489637 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:58:57.489643 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:58:57.489649 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:58:57.489656 | orchestrator | 2026-03-07 00:58:57.489662 | orchestrator | TASK [ceph-osd : Add ceph-osd 
systemd service overrides] *********************** 2026-03-07 00:58:57.489668 | orchestrator | Saturday 07 March 2026 00:55:33 +0000 (0:00:00.685) 0:09:18.303 ******** 2026-03-07 00:58:57.489674 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:58:57.489681 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:58:57.489687 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:58:57.489693 | orchestrator | 2026-03-07 00:58:57.489699 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/<cluster>-<osd-id> is present] ********* 2026-03-07 00:58:57.489706 | orchestrator | Saturday 07 March 2026 00:55:33 +0000 (0:00:00.462) 0:09:18.765 ******** 2026-03-07 00:58:57.489712 | orchestrator | ok: [testbed-node-3] => (item=3) 2026-03-07 00:58:57.489718 | orchestrator | ok: [testbed-node-4] => (item=2) 2026-03-07 00:58:57.489724 | orchestrator | ok: [testbed-node-5] => (item=4) 2026-03-07 00:58:57.489731 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-03-07 00:58:57.489737 | orchestrator | ok: [testbed-node-4] => (item=5) 2026-03-07 00:58:57.489743 | orchestrator | ok: [testbed-node-5] => (item=1) 2026-03-07 00:58:57.489749 | orchestrator | 2026-03-07 00:58:57.489755 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2026-03-07 00:58:57.489762 | orchestrator | Saturday 07 March 2026 00:55:34 +0000 (0:00:01.094) 0:09:19.859 ******** 2026-03-07 00:58:57.489768 | orchestrator | changed: [testbed-node-3] => (item=3) 2026-03-07 00:58:57.489774 | orchestrator | changed: [testbed-node-4] => (item=2) 2026-03-07 00:58:57.489784 | orchestrator | changed: [testbed-node-5] => (item=4) 2026-03-07 00:58:57.489791 | orchestrator | changed: [testbed-node-3] => (item=0) 2026-03-07 00:58:57.489802 | orchestrator | changed: [testbed-node-4] => (item=5) 2026-03-07 00:58:57.489808 | orchestrator | changed: [testbed-node-5] => (item=1) 2026-03-07 00:58:57.489814 | orchestrator | 2026-03-07 00:58:57.489821 | orchestrator | TASK [ceph-osd : 
Systemd start osd] ******************************************** 2026-03-07 00:58:57.489827 | orchestrator | Saturday 07 March 2026 00:55:36 +0000 (0:00:02.226) 0:09:22.086 ******** 2026-03-07 00:58:57.489833 | orchestrator | changed: [testbed-node-3] => (item=3) 2026-03-07 00:58:57.489839 | orchestrator | changed: [testbed-node-4] => (item=2) 2026-03-07 00:58:57.489883 | orchestrator | changed: [testbed-node-5] => (item=4) 2026-03-07 00:58:57.489891 | orchestrator | changed: [testbed-node-3] => (item=0) 2026-03-07 00:58:57.489897 | orchestrator | changed: [testbed-node-4] => (item=5) 2026-03-07 00:58:57.489903 | orchestrator | changed: [testbed-node-5] => (item=1) 2026-03-07 00:58:57.489910 | orchestrator | 2026-03-07 00:58:57.489916 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2026-03-07 00:58:57.489922 | orchestrator | Saturday 07 March 2026 00:55:41 +0000 (0:00:04.968) 0:09:27.055 ******** 2026-03-07 00:58:57.489928 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:58:57.489934 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:58:57.489941 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-03-07 00:58:57.489947 | orchestrator | 2026-03-07 00:58:57.489953 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2026-03-07 00:58:57.489959 | orchestrator | Saturday 07 March 2026 00:55:44 +0000 (0:00:02.677) 0:09:29.733 ******** 2026-03-07 00:58:57.489966 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:58:57.489972 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:58:57.489978 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 
2026-03-07 00:58:57.489984 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-03-07 00:58:57.489990 | orchestrator | 2026-03-07 00:58:57.489997 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2026-03-07 00:58:57.490003 | orchestrator | Saturday 07 March 2026 00:55:57 +0000 (0:00:12.612) 0:09:42.345 ******** 2026-03-07 00:58:57.490009 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:58:57.490039 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:58:57.490046 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:58:57.490052 | orchestrator | 2026-03-07 00:58:57.490059 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-07 00:58:57.490065 | orchestrator | Saturday 07 March 2026 00:55:58 +0000 (0:00:01.311) 0:09:43.656 ******** 2026-03-07 00:58:57.490074 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:58:57.490080 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:58:57.490086 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:58:57.490092 | orchestrator | 2026-03-07 00:58:57.490098 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-03-07 00:58:57.490110 | orchestrator | Saturday 07 March 2026 00:55:58 +0000 (0:00:00.444) 0:09:44.100 ******** 2026-03-07 00:58:57.490116 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-07 00:58:57.490123 | orchestrator | 2026-03-07 00:58:57.490129 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2026-03-07 00:58:57.490135 | orchestrator | Saturday 07 March 2026 00:55:59 +0000 (0:00:00.892) 0:09:44.993 ******** 2026-03-07 00:58:57.490141 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-07 00:58:57.490148 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-4)  2026-03-07 00:58:57.490154 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-07 00:58:57.490160 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:58:57.490166 | orchestrator | 2026-03-07 00:58:57.490173 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2026-03-07 00:58:57.490179 | orchestrator | Saturday 07 March 2026 00:56:00 +0000 (0:00:00.452) 0:09:45.446 ******** 2026-03-07 00:58:57.490191 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:58:57.490197 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:58:57.490203 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:58:57.490209 | orchestrator | 2026-03-07 00:58:57.490216 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2026-03-07 00:58:57.490222 | orchestrator | Saturday 07 March 2026 00:56:00 +0000 (0:00:00.365) 0:09:45.812 ******** 2026-03-07 00:58:57.490228 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:58:57.490234 | orchestrator | 2026-03-07 00:58:57.490240 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2026-03-07 00:58:57.490247 | orchestrator | Saturday 07 March 2026 00:56:00 +0000 (0:00:00.232) 0:09:46.044 ******** 2026-03-07 00:58:57.490253 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:58:57.490259 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:58:57.490265 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:58:57.490272 | orchestrator | 2026-03-07 00:58:57.490278 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2026-03-07 00:58:57.490284 | orchestrator | Saturday 07 March 2026 00:56:01 +0000 (0:00:00.363) 0:09:46.407 ******** 2026-03-07 00:58:57.490290 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:58:57.490297 | orchestrator | 2026-03-07 00:58:57.490303 | orchestrator | RUNNING 
HANDLER [ceph-handler : Get balancer module status] ******************** 2026-03-07 00:58:57.490309 | orchestrator | Saturday 07 March 2026 00:56:01 +0000 (0:00:00.214) 0:09:46.622 ******** 2026-03-07 00:58:57.490315 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:58:57.490322 | orchestrator | 2026-03-07 00:58:57.490328 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2026-03-07 00:58:57.490335 | orchestrator | Saturday 07 March 2026 00:56:01 +0000 (0:00:00.242) 0:09:46.865 ******** 2026-03-07 00:58:57.490341 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:58:57.490347 | orchestrator | 2026-03-07 00:58:57.490353 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2026-03-07 00:58:57.490360 | orchestrator | Saturday 07 March 2026 00:56:01 +0000 (0:00:00.144) 0:09:47.009 ******** 2026-03-07 00:58:57.490370 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:58:57.490377 | orchestrator | 2026-03-07 00:58:57.490383 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2026-03-07 00:58:57.490389 | orchestrator | Saturday 07 March 2026 00:56:02 +0000 (0:00:00.889) 0:09:47.899 ******** 2026-03-07 00:58:57.490395 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:58:57.490402 | orchestrator | 2026-03-07 00:58:57.490408 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2026-03-07 00:58:57.490414 | orchestrator | Saturday 07 March 2026 00:56:02 +0000 (0:00:00.215) 0:09:48.114 ******** 2026-03-07 00:58:57.490420 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-07 00:58:57.490426 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-07 00:58:57.490433 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-07 00:58:57.490439 | orchestrator | skipping: [testbed-node-3] 2026-03-07 
00:58:57.490445 | orchestrator | 2026-03-07 00:58:57.490451 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2026-03-07 00:58:57.490456 | orchestrator | Saturday 07 March 2026 00:56:03 +0000 (0:00:00.422) 0:09:48.537 ******** 2026-03-07 00:58:57.490462 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:58:57.490467 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:58:57.490473 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:58:57.490478 | orchestrator | 2026-03-07 00:58:57.490483 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2026-03-07 00:58:57.490491 | orchestrator | Saturday 07 March 2026 00:56:03 +0000 (0:00:00.336) 0:09:48.873 ******** 2026-03-07 00:58:57.490499 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:58:57.490507 | orchestrator | 2026-03-07 00:58:57.490517 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2026-03-07 00:58:57.490532 | orchestrator | Saturday 07 March 2026 00:56:03 +0000 (0:00:00.263) 0:09:49.137 ******** 2026-03-07 00:58:57.490541 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:58:57.490549 | orchestrator | 2026-03-07 00:58:57.490558 | orchestrator | PLAY [Apply role ceph-crash] *************************************************** 2026-03-07 00:58:57.490567 | orchestrator | 2026-03-07 00:58:57.490576 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-07 00:58:57.490585 | orchestrator | Saturday 07 March 2026 00:56:04 +0000 (0:00:01.004) 0:09:50.141 ******** 2026-03-07 00:58:57.490594 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 00:58:57.490604 | orchestrator | 2026-03-07 00:58:57.490613 | orchestrator | TASK [ceph-handler : Include 
check_running_containers.yml] ********************* 2026-03-07 00:58:57.490621 | orchestrator | Saturday 07 March 2026 00:56:06 +0000 (0:00:01.325) 0:09:51.467 ******** 2026-03-07 00:58:57.490636 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 00:58:57.490642 | orchestrator | 2026-03-07 00:58:57.490648 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-07 00:58:57.490653 | orchestrator | Saturday 07 March 2026 00:56:07 +0000 (0:00:01.392) 0:09:52.859 ******** 2026-03-07 00:58:57.490658 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:58:57.490663 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:58:57.490669 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:58:57.490674 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:58:57.490680 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:58:57.490685 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:58:57.490690 | orchestrator | 2026-03-07 00:58:57.490695 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-07 00:58:57.490701 | orchestrator | Saturday 07 March 2026 00:56:08 +0000 (0:00:01.078) 0:09:53.937 ******** 2026-03-07 00:58:57.490706 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:58:57.490711 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:58:57.490717 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:58:57.490722 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:58:57.490727 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:58:57.490733 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:58:57.490738 | orchestrator | 2026-03-07 00:58:57.490744 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-07 00:58:57.490749 | orchestrator | Saturday 07 
March 2026 00:56:09 +0000 (0:00:00.789) 0:09:54.727 ******** 2026-03-07 00:58:57.490754 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:58:57.490760 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:58:57.490765 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:58:57.490770 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:58:57.490775 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:58:57.490781 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:58:57.490786 | orchestrator | 2026-03-07 00:58:57.490791 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-07 00:58:57.490797 | orchestrator | Saturday 07 March 2026 00:56:10 +0000 (0:00:01.056) 0:09:55.783 ******** 2026-03-07 00:58:57.490802 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:58:57.490807 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:58:57.490812 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:58:57.490818 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:58:57.490823 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:58:57.490828 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:58:57.490834 | orchestrator | 2026-03-07 00:58:57.490839 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-07 00:58:57.490844 | orchestrator | Saturday 07 March 2026 00:56:11 +0000 (0:00:00.702) 0:09:56.485 ******** 2026-03-07 00:58:57.490866 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:58:57.490880 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:58:57.490885 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:58:57.490891 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:58:57.490896 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:58:57.490902 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:58:57.490907 | orchestrator | 2026-03-07 00:58:57.490913 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] 
************************* 2026-03-07 00:58:57.490923 | orchestrator | Saturday 07 March 2026 00:56:12 +0000 (0:00:01.336) 0:09:57.822 ******** 2026-03-07 00:58:57.490929 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:58:57.490934 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:58:57.490940 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:58:57.490945 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:58:57.490950 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:58:57.490956 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:58:57.490961 | orchestrator | 2026-03-07 00:58:57.490966 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-07 00:58:57.490972 | orchestrator | Saturday 07 March 2026 00:56:13 +0000 (0:00:00.723) 0:09:58.545 ******** 2026-03-07 00:58:57.490977 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:58:57.490982 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:58:57.490987 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:58:57.490993 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:58:57.490998 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:58:57.491003 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:58:57.491008 | orchestrator | 2026-03-07 00:58:57.491014 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-07 00:58:57.491019 | orchestrator | Saturday 07 March 2026 00:56:14 +0000 (0:00:01.008) 0:09:59.553 ******** 2026-03-07 00:58:57.491025 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:58:57.491030 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:58:57.491035 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:58:57.491041 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:58:57.491046 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:58:57.491051 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:58:57.491056 | orchestrator 
| 2026-03-07 00:58:57.491062 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-07 00:58:57.491067 | orchestrator | Saturday 07 March 2026 00:56:15 +0000 (0:00:00.992) 0:10:00.546 ******** 2026-03-07 00:58:57.491073 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:58:57.491078 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:58:57.491083 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:58:57.491088 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:58:57.491094 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:58:57.491099 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:58:57.491104 | orchestrator | 2026-03-07 00:58:57.491110 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-07 00:58:57.491115 | orchestrator | Saturday 07 March 2026 00:56:17 +0000 (0:00:01.699) 0:10:02.246 ******** 2026-03-07 00:58:57.491121 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:58:57.491126 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:58:57.491131 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:58:57.491137 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:58:57.491142 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:58:57.491147 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:58:57.491153 | orchestrator | 2026-03-07 00:58:57.491158 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-07 00:58:57.491163 | orchestrator | Saturday 07 March 2026 00:56:17 +0000 (0:00:00.753) 0:10:02.999 ******** 2026-03-07 00:58:57.491169 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:58:57.491174 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:58:57.491184 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:58:57.491189 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:58:57.491195 | orchestrator | ok: [testbed-node-1] 2026-03-07 
00:58:57.491205 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:58:57.491210 | orchestrator | 2026-03-07 00:58:57.491216 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-07 00:58:57.491221 | orchestrator | Saturday 07 March 2026 00:56:18 +0000 (0:00:00.928) 0:10:03.928 ******** 2026-03-07 00:58:57.491226 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:58:57.491232 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:58:57.491237 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:58:57.491243 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:58:57.491248 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:58:57.491254 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:58:57.491259 | orchestrator | 2026-03-07 00:58:57.491264 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-07 00:58:57.491270 | orchestrator | Saturday 07 March 2026 00:56:19 +0000 (0:00:00.664) 0:10:04.593 ******** 2026-03-07 00:58:57.491275 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:58:57.491280 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:58:57.491286 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:58:57.491291 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:58:57.491296 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:58:57.491302 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:58:57.491307 | orchestrator | 2026-03-07 00:58:57.491313 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-07 00:58:57.491318 | orchestrator | Saturday 07 March 2026 00:56:20 +0000 (0:00:00.954) 0:10:05.547 ******** 2026-03-07 00:58:57.491324 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:58:57.491329 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:58:57.491334 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:58:57.491340 | orchestrator | skipping: [testbed-node-0] 
2026-03-07 00:58:57.491345 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:58:57.491350 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:58:57.491356 | orchestrator | 2026-03-07 00:58:57.491361 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-07 00:58:57.491367 | orchestrator | Saturday 07 March 2026 00:56:21 +0000 (0:00:00.632) 0:10:06.180 ******** 2026-03-07 00:58:57.491372 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:58:57.491378 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:58:57.491383 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:58:57.491388 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:58:57.491394 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:58:57.491399 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:58:57.491405 | orchestrator | 2026-03-07 00:58:57.491410 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-07 00:58:57.491415 | orchestrator | Saturday 07 March 2026 00:56:21 +0000 (0:00:00.950) 0:10:07.130 ******** 2026-03-07 00:58:57.491421 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:58:57.491426 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:58:57.491432 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:58:57.491437 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:58:57.491442 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:58:57.491448 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:58:57.491453 | orchestrator | 2026-03-07 00:58:57.491459 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-07 00:58:57.491467 | orchestrator | Saturday 07 March 2026 00:56:22 +0000 (0:00:00.656) 0:10:07.786 ******** 2026-03-07 00:58:57.491473 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:58:57.491478 | orchestrator | skipping: [testbed-node-4] 
2026-03-07 00:58:57.491484 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:58:57.491493 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:58:57.491503 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:58:57.491511 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:58:57.491520 | orchestrator | 2026-03-07 00:58:57.491529 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-07 00:58:57.491539 | orchestrator | Saturday 07 March 2026 00:56:23 +0000 (0:00:00.994) 0:10:08.781 ******** 2026-03-07 00:58:57.491555 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:58:57.491565 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:58:57.491575 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:58:57.491581 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:58:57.491586 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:58:57.491591 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:58:57.491596 | orchestrator | 2026-03-07 00:58:57.491602 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-07 00:58:57.491607 | orchestrator | Saturday 07 March 2026 00:56:24 +0000 (0:00:00.735) 0:10:09.516 ******** 2026-03-07 00:58:57.491612 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:58:57.491618 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:58:57.491623 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:58:57.491628 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:58:57.491633 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:58:57.491639 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:58:57.491644 | orchestrator | 2026-03-07 00:58:57.491649 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ******************************** 2026-03-07 00:58:57.491655 | orchestrator | Saturday 07 March 2026 00:56:25 +0000 (0:00:01.468) 0:10:10.984 ******** 2026-03-07 00:58:57.491660 | orchestrator | changed: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] 2026-03-07 00:58:57.491665 | orchestrator | 2026-03-07 00:58:57.491671 | orchestrator | TASK [ceph-crash : Get keys from monitors] ************************************* 2026-03-07 00:58:57.491676 | orchestrator | Saturday 07 March 2026 00:56:29 +0000 (0:00:03.541) 0:10:14.526 ******** 2026-03-07 00:58:57.491681 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-07 00:58:57.491687 | orchestrator | 2026-03-07 00:58:57.491692 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] ********************************* 2026-03-07 00:58:57.491697 | orchestrator | Saturday 07 March 2026 00:56:31 +0000 (0:00:02.349) 0:10:16.875 ******** 2026-03-07 00:58:57.491703 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:58:57.491708 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:58:57.491713 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:58:57.491720 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:58:57.491729 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:58:57.491738 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:58:57.491746 | orchestrator | 2026-03-07 00:58:57.491755 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] ************************** 2026-03-07 00:58:57.491764 | orchestrator | Saturday 07 March 2026 00:56:33 +0000 (0:00:01.790) 0:10:18.666 ******** 2026-03-07 00:58:57.491779 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:58:57.491789 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:58:57.491794 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:58:57.491799 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:58:57.491805 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:58:57.491810 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:58:57.491815 | orchestrator | 2026-03-07 00:58:57.491821 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] ********************************** 
2026-03-07 00:58:57.491826 | orchestrator | Saturday 07 March 2026 00:56:34 +0000 (0:00:01.037) 0:10:19.704 ******** 2026-03-07 00:58:57.491832 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 00:58:57.491839 | orchestrator | 2026-03-07 00:58:57.491844 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ******** 2026-03-07 00:58:57.491867 | orchestrator | Saturday 07 March 2026 00:56:36 +0000 (0:00:01.761) 0:10:21.465 ******** 2026-03-07 00:58:57.491872 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:58:57.491878 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:58:57.491883 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:58:57.491889 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:58:57.491894 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:58:57.491899 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:58:57.491910 | orchestrator | 2026-03-07 00:58:57.491915 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] ******************************* 2026-03-07 00:58:57.491921 | orchestrator | Saturday 07 March 2026 00:56:38 +0000 (0:00:02.123) 0:10:23.588 ******** 2026-03-07 00:58:57.491926 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:58:57.491931 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:58:57.491937 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:58:57.491942 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:58:57.491947 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:58:57.491953 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:58:57.491958 | orchestrator | 2026-03-07 00:58:57.491963 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] **************************** 2026-03-07 00:58:57.491969 | orchestrator | Saturday 07 March 2026 00:56:41 +0000 (0:00:02.965) 
0:10:26.554 ******** 2026-03-07 00:58:57.491975 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 00:58:57.491980 | orchestrator | 2026-03-07 00:58:57.491986 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ****** 2026-03-07 00:58:57.491991 | orchestrator | Saturday 07 March 2026 00:56:42 +0000 (0:00:01.250) 0:10:27.805 ******** 2026-03-07 00:58:57.491996 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:58:57.492002 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:58:57.492007 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:58:57.492013 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:58:57.492018 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:58:57.492023 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:58:57.492029 | orchestrator | 2026-03-07 00:58:57.492034 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] **************** 2026-03-07 00:58:57.492044 | orchestrator | Saturday 07 March 2026 00:56:43 +0000 (0:00:00.812) 0:10:28.617 ******** 2026-03-07 00:58:57.492050 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:58:57.492055 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:58:57.492061 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:58:57.492066 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:58:57.492071 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:58:57.492077 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:58:57.492082 | orchestrator | 2026-03-07 00:58:57.492087 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] ******* 2026-03-07 00:58:57.492093 | orchestrator | Saturday 07 March 2026 00:56:45 +0000 (0:00:02.303) 0:10:30.921 ******** 2026-03-07 00:58:57.492098 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:58:57.492104 | 
orchestrator | ok: [testbed-node-4] 2026-03-07 00:58:57.492109 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:58:57.492114 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:58:57.492120 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:58:57.492125 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:58:57.492130 | orchestrator | 2026-03-07 00:58:57.492136 | orchestrator | PLAY [Apply role ceph-mds] ***************************************************** 2026-03-07 00:58:57.492141 | orchestrator | 2026-03-07 00:58:57.492146 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-07 00:58:57.492152 | orchestrator | Saturday 07 March 2026 00:56:47 +0000 (0:00:01.255) 0:10:32.177 ******** 2026-03-07 00:58:57.492157 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-07 00:58:57.492163 | orchestrator | 2026-03-07 00:58:57.492168 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-07 00:58:57.492174 | orchestrator | Saturday 07 March 2026 00:56:47 +0000 (0:00:00.580) 0:10:32.758 ******** 2026-03-07 00:58:57.492179 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-07 00:58:57.492184 | orchestrator | 2026-03-07 00:58:57.492190 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-07 00:58:57.492200 | orchestrator | Saturday 07 March 2026 00:56:48 +0000 (0:00:01.037) 0:10:33.795 ******** 2026-03-07 00:58:57.492205 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:58:57.492211 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:58:57.492216 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:58:57.492222 | orchestrator | 2026-03-07 00:58:57.492227 | orchestrator | TASK [ceph-handler : Check for an osd 
container] ******************************* 2026-03-07 00:58:57.492232 | orchestrator | Saturday 07 March 2026 00:56:49 +0000 (0:00:00.388) 0:10:34.183 ******** 2026-03-07 00:58:57.492238 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:58:57.492243 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:58:57.492248 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:58:57.492254 | orchestrator | 2026-03-07 00:58:57.492259 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-07 00:58:57.492268 | orchestrator | Saturday 07 March 2026 00:56:49 +0000 (0:00:00.714) 0:10:34.898 ******** 2026-03-07 00:58:57.492273 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:58:57.492279 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:58:57.492284 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:58:57.492289 | orchestrator | 2026-03-07 00:58:57.492295 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-07 00:58:57.492300 | orchestrator | Saturday 07 March 2026 00:56:50 +0000 (0:00:01.146) 0:10:36.045 ******** 2026-03-07 00:58:57.492305 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:58:57.492311 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:58:57.492316 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:58:57.492321 | orchestrator | 2026-03-07 00:58:57.492327 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-07 00:58:57.492332 | orchestrator | Saturday 07 March 2026 00:56:51 +0000 (0:00:00.753) 0:10:36.798 ******** 2026-03-07 00:58:57.492337 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:58:57.492343 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:58:57.492348 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:58:57.492354 | orchestrator | 2026-03-07 00:58:57.492359 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-07 
00:58:57.492365 | orchestrator | Saturday 07 March 2026 00:56:51 +0000 (0:00:00.313) 0:10:37.112 ******** 2026-03-07 00:58:57.492370 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:58:57.492375 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:58:57.492381 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:58:57.492386 | orchestrator | 2026-03-07 00:58:57.492392 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-07 00:58:57.492397 | orchestrator | Saturday 07 March 2026 00:56:52 +0000 (0:00:00.368) 0:10:37.481 ******** 2026-03-07 00:58:57.492403 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:58:57.492408 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:58:57.492414 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:58:57.492419 | orchestrator | 2026-03-07 00:58:57.492424 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-07 00:58:57.492430 | orchestrator | Saturday 07 March 2026 00:56:53 +0000 (0:00:00.770) 0:10:38.252 ******** 2026-03-07 00:58:57.492435 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:58:57.492441 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:58:57.492446 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:58:57.492451 | orchestrator | 2026-03-07 00:58:57.492457 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-07 00:58:57.492462 | orchestrator | Saturday 07 March 2026 00:56:53 +0000 (0:00:00.798) 0:10:39.051 ******** 2026-03-07 00:58:57.492467 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:58:57.492472 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:58:57.492478 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:58:57.492483 | orchestrator | 2026-03-07 00:58:57.492488 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-07 00:58:57.492494 | orchestrator | 
Saturday 07 March 2026 00:56:54 +0000 (0:00:00.758) 0:10:39.809 ******** 2026-03-07 00:58:57.492499 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:58:57.492510 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:58:57.492515 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:58:57.492521 | orchestrator | 2026-03-07 00:58:57.492526 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-07 00:58:57.492552 | orchestrator | Saturday 07 March 2026 00:56:54 +0000 (0:00:00.286) 0:10:40.096 ******** 2026-03-07 00:58:57.492558 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:58:57.492563 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:58:57.492569 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:58:57.492574 | orchestrator | 2026-03-07 00:58:57.492579 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-07 00:58:57.492585 | orchestrator | Saturday 07 March 2026 00:56:55 +0000 (0:00:00.557) 0:10:40.653 ******** 2026-03-07 00:58:57.492590 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:58:57.492596 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:58:57.492601 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:58:57.492606 | orchestrator | 2026-03-07 00:58:57.492612 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-07 00:58:57.492617 | orchestrator | Saturday 07 March 2026 00:56:55 +0000 (0:00:00.362) 0:10:41.015 ******** 2026-03-07 00:58:57.492622 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:58:57.492628 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:58:57.492633 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:58:57.492638 | orchestrator | 2026-03-07 00:58:57.492644 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-07 00:58:57.492649 | orchestrator | Saturday 07 March 2026 00:56:56 +0000 
(0:00:00.419) 0:10:41.434 ******** 2026-03-07 00:58:57.492654 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:58:57.492660 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:58:57.492665 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:58:57.492670 | orchestrator | 2026-03-07 00:58:57.492675 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-07 00:58:57.492681 | orchestrator | Saturday 07 March 2026 00:56:56 +0000 (0:00:00.414) 0:10:41.849 ******** 2026-03-07 00:58:57.492686 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:58:57.492692 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:58:57.492697 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:58:57.492702 | orchestrator | 2026-03-07 00:58:57.492708 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-07 00:58:57.492713 | orchestrator | Saturday 07 March 2026 00:56:57 +0000 (0:00:00.691) 0:10:42.541 ******** 2026-03-07 00:58:57.492719 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:58:57.492724 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:58:57.492729 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:58:57.492735 | orchestrator | 2026-03-07 00:58:57.492740 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-07 00:58:57.492745 | orchestrator | Saturday 07 March 2026 00:56:57 +0000 (0:00:00.353) 0:10:42.895 ******** 2026-03-07 00:58:57.492751 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:58:57.492756 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:58:57.492761 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:58:57.492767 | orchestrator | 2026-03-07 00:58:57.492772 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-07 00:58:57.492777 | orchestrator | Saturday 07 March 2026 00:56:58 +0000 (0:00:00.367) 
0:10:43.262 ********
2026-03-07 00:58:57.492783 | orchestrator | ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_exporter_status] *************************
Saturday 07 March 2026 00:56:58 +0000 (0:00:00.534) 0:10:43.797 ********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-mds : Include create_mds_filesystems.yml] ***************************
Saturday 07 March 2026 00:56:59 +0000 (0:00:01.043) 0:10:44.841 ********
skipping: [testbed-node-4]
skipping: [testbed-node-5]
included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3

TASK [ceph-facts : Get current default crush rule details] *********************
Saturday 07 March 2026 00:57:00 +0000 (0:00:00.535) 0:10:45.376 ********
ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]

TASK [ceph-facts : Get current default crush rule name] ************************
Saturday 07 March 2026 00:57:02 +0000 (0:00:02.249) 0:10:47.626 ********
skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})
skipping: [testbed-node-3]

TASK [ceph-mds : Create filesystem pools] **************************************
Saturday 07 March 2026 00:57:03 +0000 (0:00:00.540) 0:10:48.166 ********
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})

TASK [ceph-mds : Create ceph filesystem] ***************************************
Saturday 07 March 2026 00:57:11 +0000 (0:00:08.516) 0:10:56.683 ********
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]

TASK [ceph-mds : Include common.yml] *******************************************
Saturday 07 March 2026 00:57:14 +0000 (0:00:03.477) 0:11:00.160 ********
included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-mds : Create bootstrap-mds and mds directories] *********************
Saturday 07 March 2026 00:57:16 +0000 (0:00:01.305) 0:11:01.466 ********
ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/)
changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3)
ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/)
ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/)
changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4)
changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5)

TASK [ceph-mds : Get keys from monitors] ***************************************
Saturday 07 March 2026 00:57:18 +0000 (0:00:01.867) 0:11:03.334 ********
ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
skipping: [testbed-node-3] => (item=None)
ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]

TASK [ceph-mds : Copy ceph key(s) if needed] ***********************************
Saturday 07 March 2026 00:57:20 +0000 (0:00:02.448) 0:11:05.783 ********
changed: [testbed-node-3] => (item=None)
skipping: [testbed-node-3] => (item=None)
changed: [testbed-node-3]
changed: [testbed-node-4] => (item=None)
changed: [testbed-node-5] => (item=None)
skipping: [testbed-node-5] => (item=None)
skipping: [testbed-node-4] => (item=None)
changed: [testbed-node-5]
changed: [testbed-node-4]

TASK [ceph-mds : Create mds keyring] *******************************************
Saturday 07 March 2026 00:57:22 +0000 (0:00:01.388) 0:11:07.171 ********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-mds : Non_containerized.yml] ****************************************
Saturday 07 March 2026 00:57:25 +0000 (0:00:03.194) 0:11:10.366 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-mds : Containerized.yml] ********************************************
Saturday 07 March 2026 00:57:25 +0000 (0:00:00.320) 0:11:10.686 ********
included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-mds : Include_tasks systemd.yml] ************************************
Saturday 07 March 2026 00:57:26 +0000 (0:00:00.693) 0:11:11.379 ********
included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-mds : Generate systemd unit file] ***********************************
Saturday 07 March 2026 00:57:26 +0000 (0:00:00.517) 0:11:11.896 ********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-mds : Generate systemd ceph-mds target file] ************************
Saturday 07 March 2026 00:57:28 +0000 (0:00:01.392) 0:11:13.289 ********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-mds : Enable ceph-mds.target] ***************************************
Saturday 07 March 2026 00:57:29 +0000 (0:00:01.532) 0:11:14.822 ********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-mds : Systemd start mds container] **********************************
Saturday 07 March 2026 00:57:31 +0000 (0:00:02.004) 0:11:16.826 ********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-mds : Wait for mds socket to exist] *********************************
Saturday 07 March 2026 00:57:33 +0000 (0:00:02.149) 0:11:18.975 ********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
Saturday 07 March 2026 00:57:35 +0000 (0:00:01.713) 0:11:20.689 ********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
Saturday 07 March 2026 00:57:36 +0000 (0:00:00.652) 0:11:21.341 ********
included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5

RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
Saturday 07 March 2026 00:57:36 +0000 (0:00:00.686) 0:11:22.028 ********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
Saturday 07 March 2026 00:57:37 +0000 (0:00:00.341) 0:11:22.370 ********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
Saturday 07 March 2026 00:57:38 +0000 (0:00:01.112) 0:11:23.483 ********
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
Saturday 07 March 2026 00:57:39 +0000 (0:00:00.824) 0:11:24.307 ********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

PLAY [Apply role ceph-rgw] *****************************************************

TASK [ceph-handler : Include check_running_cluster.yml] ************************
Saturday 07 March 2026 00:57:39 +0000 (0:00:00.770) 0:11:25.078 ********
included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-handler : Include check_running_containers.yml] *********************
Saturday 07 March 2026 00:57:40 +0000 (0:00:00.524) 0:11:25.602 ********
included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-handler : Check for a mon container] ********************************
Saturday 07 March 2026 00:57:41 +0000 (0:00:00.317) 0:11:26.276 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for an osd container] *******************************
Saturday 07 March 2026 00:57:41 +0000 (0:00:00.317) 0:11:26.594 ********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a mds container] ********************************
Saturday 07 March 2026 00:57:42 +0000 (0:00:00.715) 0:11:27.309 ********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a rgw container] ********************************
Saturday 07 March 2026 00:57:42 +0000 (0:00:00.829) 0:11:28.138 ********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a mgr container] ********************************
Saturday 07 March 2026 00:57:43 +0000 (0:00:00.686) 0:11:28.825 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a rbd mirror container] *************************
Saturday 07 March 2026 00:57:43 +0000 (0:00:00.294) 0:11:29.119 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a nfs container] ********************************
Saturday 07 March 2026 00:57:44 +0000 (0:00:00.334) 0:11:29.454 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a ceph-crash container] *************************
Saturday 07 March 2026 00:57:44 +0000 (0:00:00.627) 0:11:30.082 ********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a ceph-exporter container] **********************
Saturday 07 March 2026 00:57:45 +0000 (0:00:00.740) 0:11:30.822 ********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Include check_socket_non_container.yml] *******************
Saturday 07 March 2026 00:57:46 +0000 (0:00:00.801) 0:11:31.624 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mon_status] ******************************
Saturday 07 March 2026 00:57:46 +0000 (0:00:00.356) 0:11:31.980 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_osd_status] ******************************
Saturday 07 March 2026 00:57:47 +0000 (0:00:00.689) 0:11:32.670 ********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mds_status] ******************************
Saturday 07 March 2026 00:57:47 +0000 (0:00:00.367) 0:11:33.037 ********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
Saturday 07 March 2026 00:57:48 +0000 (0:00:00.387) 0:11:33.424 ********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
Saturday 07 March 2026 00:57:48 +0000 (0:00:00.421) 0:11:33.846 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
Saturday 07 March 2026 00:57:49 +0000 (0:00:00.641) 0:11:34.488 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
Saturday 07 March 2026 00:57:49 +0000 (0:00:00.347) 0:11:34.836 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_crash_status] ****************************
Saturday 07 March 2026 00:57:50 +0000 (0:00:00.380) 0:11:35.216 ********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_exporter_status] *************************
Saturday 07 March 2026 00:57:50 +0000 (0:00:00.459) 0:11:35.676 ********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-rgw : Include common.yml] *******************************************
Saturday 07 March 2026 00:57:51 +0000 (0:00:00.930) 0:11:36.606 ********
included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-rgw : Get keys from monitors] ***************************************
Saturday 07 March 2026 00:57:52 +0000 (0:00:00.650) 0:11:37.257 ********
ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
skipping: [testbed-node-3] => (item=None)
ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]

TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
Saturday 07 March 2026 00:57:54 +0000 (0:00:02.223) 0:11:39.480 ********
changed: [testbed-node-3] => (item=None)
skipping: [testbed-node-3] => (item=None)
changed: [testbed-node-3]
changed: [testbed-node-4] => (item=None)
skipping: [testbed-node-4] => (item=None)
changed: [testbed-node-4]
changed: [testbed-node-5] => (item=None)
skipping: [testbed-node-5] => (item=None)
changed: [testbed-node-5]

TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] **********
Saturday 07 March 2026 00:57:55 +0000 (0:00:01.569) 0:11:41.050 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-rgw : Include_tasks pre_requisite.yml] ******************************
Saturday 07 March 2026 00:57:56 +0000 (0:00:00.384) 0:11:41.435 ********
included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-rgw : Create rados gateway directories] *****************************
Saturday 07 March 2026 00:57:56 +0000 (0:00:00.603) 0:11:42.038 ********
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})

TASK [ceph-rgw : Create rgw keyrings] ******************************************
Saturday 07 March 2026 00:57:58 +0000 (0:00:01.570) 0:11:43.608 ********
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None)
changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]

TASK [ceph-rgw : Get keys from monitors] ***************************************
Saturday 07 March 2026 00:58:03 +0000 (0:00:05.315) 0:11:48.923 ********
ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None)
ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}]
ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}]

TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
Saturday 07 March 2026 00:58:06 +0000 (0:00:02.402) 0:11:51.326 ********
changed: [testbed-node-3] => (item=None)
changed: [testbed-node-3]
changed: [testbed-node-4] => (item=None)
changed: [testbed-node-4]
changed: [testbed-node-5] => (item=None)
changed: [testbed-node-5]

TASK [ceph-rgw : Rgw pool creation tasks] **************************************
Saturday 07 March 2026 00:58:07 +0000 (0:00:01.197) 0:11:52.523 ********
included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3

TASK [ceph-rgw : Create ec profile] ********************************************
Saturday 07 March 2026 00:58:07 +0000 (0:00:00.258) 0:11:52.782 ********
skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
skipping: [testbed-node-3]

TASK [ceph-rgw : Set crush rule] ***********************************************
Saturday 07 March 2026 00:58:09 +0000 (0:00:01.389) 0:11:54.171 ********
skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
skipping: [testbed-node-3]

TASK [ceph-rgw : Create rgw pools] *********************************************
Saturday 07 March 2026 00:58:09 +0000 (0:00:00.694) 0:11:54.866 ********
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})

TASK [ceph-rgw : Include_tasks openstack-keystone.yml] *************************
Saturday 07 March 2026 00:58:41 +0000 (0:00:31.718) 0:12:26.584 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-rgw : Include_tasks start_radosgw.yml] ******************************
Saturday 07 March 2026 00:58:41 +0000 (0:00:00.367) 0:12:26.952 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-rgw : Include start_docker_rgw.yml] *********************************
Saturday 07 March 2026 00:58:42 +0000 (0:00:00.366) 0:12:27.319 ********
included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-rgw : Include_task systemd.yml] *************************************
Saturday 07 March 2026 00:58:43 +0000 (0:00:00.947) 0:12:28.266 ********
included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-rgw : Generate systemd unit file] ***********************************
Saturday 07 March 2026 00:58:43 +0000 (0:00:00.692) 0:12:28.959 ********
changed: [testbed-node-3]
changed: [testbed-node-5]
changed: [testbed-node-4]

TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ********************
Saturday 07 March 2026 00:58:45 +0000 (0:00:01.334) 0:12:30.293 ********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-rgw : Enable ceph-radosgw.target] ***********************************
Saturday 07 March 2026 00:58:46 +0000 (0:00:01.549) 0:12:31.843 ********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-rgw : Systemd start rgw container] **********************************
Saturday 07 March 2026 00:58:48 +0000 (0:00:01.820) 0:12:33.663 ********
changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})

RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
Saturday 07 March 2026 00:58:51 +0000 (0:00:02.734) 0:12:36.398 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
| 2026-03-07 00:58:57.495104 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-03-07 00:58:57.495109 | orchestrator | Saturday 07 March 2026 00:58:51 +0000 (0:00:00.369) 0:12:36.767 ******** 2026-03-07 00:58:57.495117 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-07 00:58:57.495122 | orchestrator | 2026-03-07 00:58:57.495127 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-03-07 00:58:57.495132 | orchestrator | Saturday 07 March 2026 00:58:52 +0000 (0:00:00.699) 0:12:37.467 ******** 2026-03-07 00:58:57.495137 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:58:57.495141 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:58:57.495146 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:58:57.495151 | orchestrator | 2026-03-07 00:58:57.495156 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-03-07 00:58:57.495164 | orchestrator | Saturday 07 March 2026 00:58:52 +0000 (0:00:00.658) 0:12:38.125 ******** 2026-03-07 00:58:57.495169 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:58:57.495174 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:58:57.495179 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:58:57.495184 | orchestrator | 2026-03-07 00:58:57.495189 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2026-03-07 00:58:57.495193 | orchestrator | Saturday 07 March 2026 00:58:53 +0000 (0:00:00.365) 0:12:38.491 ******** 2026-03-07 00:58:57.495198 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-07 00:58:57.495203 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-07 00:58:57.495208 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-07 00:58:57.495213 | orchestrator 
| skipping: [testbed-node-3] 2026-03-07 00:58:57.495217 | orchestrator | 2026-03-07 00:58:57.495222 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-03-07 00:58:57.495227 | orchestrator | Saturday 07 March 2026 00:58:54 +0000 (0:00:00.681) 0:12:39.172 ******** 2026-03-07 00:58:57.495232 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:58:57.495237 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:58:57.495242 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:58:57.495246 | orchestrator | 2026-03-07 00:58:57.495251 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-07 00:58:57.495256 | orchestrator | testbed-node-0 : ok=134  changed=34  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0 2026-03-07 00:58:57.495261 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2026-03-07 00:58:57.495266 | orchestrator | testbed-node-2 : ok=134  changed=34  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2026-03-07 00:58:57.495271 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0 2026-03-07 00:58:57.495276 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2026-03-07 00:58:57.495286 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2026-03-07 00:58:57.495291 | orchestrator | 2026-03-07 00:58:57.495296 | orchestrator | 2026-03-07 00:58:57.495301 | orchestrator | 2026-03-07 00:58:57.495305 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-07 00:58:57.495310 | orchestrator | Saturday 07 March 2026 00:58:54 +0000 (0:00:00.295) 0:12:39.468 ******** 2026-03-07 00:58:57.495315 | orchestrator | =============================================================================== 
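The PLAY RECAP above is what a wrapper script would inspect to decide whether the deployment succeeded. A minimal sketch of parsing recap lines into per-host counters, assuming the standard Ansible recap format (field spacing can vary between Ansible versions):

```python
import re

# Sketch: parse Ansible PLAY RECAP lines into per-host stats so a caller can
# fail fast on unreachable or failed hosts. Sample lines are taken from the
# recap above; the regex assumes the standard recap layout.
RECAP_RE = re.compile(
    r"^(?P<host>\S+)\s*:\s*ok=(?P<ok>\d+)\s+changed=(?P<changed>\d+)\s+"
    r"unreachable=(?P<unreachable>\d+)\s+failed=(?P<failed>\d+)"
)

def parse_recap(lines):
    stats = {}
    for line in lines:
        m = RECAP_RE.match(line.strip())
        if m:
            host = m.group("host")
            stats[host] = {k: int(v) for k, v in m.groupdict().items() if k != "host"}
    return stats

recap = parse_recap([
    "testbed-node-0 : ok=134  changed=34  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0",
    "testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0",
])
assert all(s["failed"] == 0 and s["unreachable"] == 0 for s in recap.values())
```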
2026-03-07 00:58:57.495320 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 55.50s 2026-03-07 00:58:57.495325 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 39.02s 2026-03-07 00:58:57.495329 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 31.72s 2026-03-07 00:58:57.495334 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 30.07s 2026-03-07 00:58:57.495339 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 21.87s 2026-03-07 00:58:57.495344 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 15.20s 2026-03-07 00:58:57.495349 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.61s 2026-03-07 00:58:57.495354 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 11.41s 2026-03-07 00:58:57.495358 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 9.82s 2026-03-07 00:58:57.495363 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 8.52s 2026-03-07 00:58:57.495371 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 7.62s 2026-03-07 00:58:57.495376 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.57s 2026-03-07 00:58:57.495381 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 5.32s 2026-03-07 00:58:57.495385 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 5.11s 2026-03-07 00:58:57.495390 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 4.97s 2026-03-07 00:58:57.495395 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 4.65s 2026-03-07 
00:58:57.495400 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 4.13s 2026-03-07 00:58:57.495405 | orchestrator | ceph-facts : Collect existed devices ------------------------------------ 4.06s 2026-03-07 00:58:57.495409 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 3.88s 2026-03-07 00:58:57.495414 | orchestrator | ceph-config : Generate Ceph file ---------------------------------------- 3.62s 2026-03-07 00:58:57.495422 | orchestrator | 2026-03-07 00:58:57 | INFO  | Task 476aa002-fef5-421c-a958-975f3289671b is in state STARTED 2026-03-07 00:58:57.495427 | orchestrator | 2026-03-07 00:58:57 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:59:00.512487 | orchestrator | 2026-03-07 00:59:00 | INFO  | Task 8f335aab-7019-4a86-a41c-19d160186bd9 is in state STARTED 2026-03-07 00:59:00.513490 | orchestrator | 2026-03-07 00:59:00 | INFO  | Task 69aa7610-06c7-47a5-8987-d81562fa3369 is in state STARTED 2026-03-07 00:59:00.514371 | orchestrator | 2026-03-07 00:59:00 | INFO  | Task 476aa002-fef5-421c-a958-975f3289671b is in state STARTED 2026-03-07 00:59:00.514622 | orchestrator | 2026-03-07 00:59:00 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:59:40.135306 | orchestrator | 2026-03-07 00:59:40 | INFO  | Task 8f335aab-7019-4a86-a41c-19d160186bd9 is in state STARTED 2026-03-07 00:59:40.136603 | orchestrator | 2026-03-07 00:59:40 | INFO  | Task 69aa7610-06c7-47a5-8987-d81562fa3369 is in state STARTED 2026-03-07 00:59:40.138835 | orchestrator | 2026-03-07 00:59:40 | INFO  | Task 476aa002-fef5-421c-a958-975f3289671b is in state STARTED 2026-03-07 00:59:40.138938 | orchestrator | 2026-03-07 00:59:40 | INFO  | 
Wait 1 second(s) until the next check 2026-03-07 00:59:43.184409 | orchestrator | 2026-03-07 00:59:43 | INFO  | Task 8f335aab-7019-4a86-a41c-19d160186bd9 is in state STARTED 2026-03-07 00:59:43.187556 | orchestrator | 2026-03-07 00:59:43 | INFO  | Task 69aa7610-06c7-47a5-8987-d81562fa3369 is in state STARTED 2026-03-07 00:59:43.188617 | orchestrator | 2026-03-07 00:59:43 | INFO  | Task 476aa002-fef5-421c-a958-975f3289671b is in state STARTED 2026-03-07 00:59:43.188691 | orchestrator | 2026-03-07 00:59:43 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:59:46.241701 | orchestrator | 2026-03-07 00:59:46 | INFO  | Task 8f335aab-7019-4a86-a41c-19d160186bd9 is in state STARTED 2026-03-07 00:59:46.243813 | orchestrator | 2026-03-07 00:59:46 | INFO  | Task 69aa7610-06c7-47a5-8987-d81562fa3369 is in state STARTED 2026-03-07 00:59:46.246945 | orchestrator | 2026-03-07 00:59:46 | INFO  | Task 476aa002-fef5-421c-a958-975f3289671b is in state STARTED 2026-03-07 00:59:46.247035 | orchestrator | 2026-03-07 00:59:46 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:59:49.287259 | orchestrator | 2026-03-07 00:59:49 | INFO  | Task 8f335aab-7019-4a86-a41c-19d160186bd9 is in state STARTED 2026-03-07 00:59:49.287703 | orchestrator | 2026-03-07 00:59:49 | INFO  | Task 69aa7610-06c7-47a5-8987-d81562fa3369 is in state STARTED 2026-03-07 00:59:49.291049 | orchestrator | 2026-03-07 00:59:49 | INFO  | Task 476aa002-fef5-421c-a958-975f3289671b is in state STARTED 2026-03-07 00:59:49.291110 | orchestrator | 2026-03-07 00:59:49 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:59:52.330496 | orchestrator | 2026-03-07 00:59:52 | INFO  | Task 8f335aab-7019-4a86-a41c-19d160186bd9 is in state STARTED 2026-03-07 00:59:52.336396 | orchestrator | 2026-03-07 00:59:52.336648 | orchestrator | 2026-03-07 00:59:52.336679 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-07 00:59:52.336697 | orchestrator | 
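The repeated "Task … is in state STARTED" / "Wait 1 second(s) until the next check" messages above come from a poll-until-done loop over the submitted task IDs. A generic sketch of such a loop (not OSISM's actual implementation; `fetch_state` is a hypothetical stand-in for the real task-state lookup):

```python
import time

# Sketch of a poll-until-done loop producing output like the
# "Task ... is in state STARTED" / "Wait 1 second(s)" messages above.
# fetch_state(task_id) -> str is a hypothetical callback supplied by the caller.
def wait_for_tasks(task_ids, fetch_state, interval=1.0, timeout=60.0):
    """Block until every task leaves the PENDING/STARTED states, or raise on timeout."""
    deadline = time.monotonic() + timeout
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):  # sorted() copies, safe to discard below
            state = fetch_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state not in ("PENDING", "STARTED"):
                pending.discard(task_id)
        if not pending:
            break
        if time.monotonic() >= deadline:
            raise TimeoutError(f"tasks still pending: {sorted(pending)}")
        print(f"Wait {int(interval)} second(s) until the next check")
        time.sleep(interval)
    return True
```

For example, a `fetch_state` that returns `"SUCCESS"` after two `"STARTED"` polls makes the loop print two wait messages and then return.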
2026-03-07 00:59:52.336716 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-07 00:59:52.336734 | orchestrator | Saturday 07 March 2026 00:56:45 +0000 (0:00:00.285) 0:00:00.285 ******** 2026-03-07 00:59:52.336745 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:59:52.336756 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:59:52.336766 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:59:52.336776 | orchestrator | 2026-03-07 00:59:52.336785 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-07 00:59:52.336795 | orchestrator | Saturday 07 March 2026 00:56:46 +0000 (0:00:00.344) 0:00:00.629 ******** 2026-03-07 00:59:52.336806 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2026-03-07 00:59:52.336816 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2026-03-07 00:59:52.336851 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2026-03-07 00:59:52.336861 | orchestrator | 2026-03-07 00:59:52.336871 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2026-03-07 00:59:52.336911 | orchestrator | 2026-03-07 00:59:52.336931 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-07 00:59:52.336942 | orchestrator | Saturday 07 March 2026 00:56:46 +0000 (0:00:00.499) 0:00:01.129 ******** 2026-03-07 00:59:52.336952 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 00:59:52.336962 | orchestrator | 2026-03-07 00:59:52.336972 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2026-03-07 00:59:52.336981 | orchestrator | Saturday 07 March 2026 00:56:47 +0000 (0:00:00.610) 0:00:01.740 ******** 2026-03-07 00:59:52.336991 | orchestrator | changed: [testbed-node-0] => (item={'name': 
'vm.max_map_count', 'value': 262144}) 2026-03-07 00:59:52.337001 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-07 00:59:52.337010 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-07 00:59:52.337020 | orchestrator | 2026-03-07 00:59:52.337036 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2026-03-07 00:59:52.337057 | orchestrator | Saturday 07 March 2026 00:56:48 +0000 (0:00:00.688) 0:00:02.428 ******** 2026-03-07 00:59:52.337102 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-07 00:59:52.337125 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-07 00:59:52.337169 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-07 00:59:52.337204 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 
'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-07 00:59:52.337224 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-07 00:59:52.337251 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-07 00:59:52.337269 | orchestrator | 2026-03-07 00:59:52.337282 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-07 00:59:52.337293 | orchestrator | Saturday 07 March 2026 00:56:50 +0000 (0:00:02.079) 0:00:04.508 ******** 2026-03-07 00:59:52.337305 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 00:59:52.337316 | orchestrator | 2026-03-07 00:59:52.337327 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2026-03-07 00:59:52.337337 | orchestrator | Saturday 07 March 2026 00:56:50 +0000 (0:00:00.655) 0:00:05.164 ******** 2026-03-07 00:59:52.337361 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 
'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-07 00:59:52.337381 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-07 00:59:52.337413 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-07 00:59:52.337439 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 
'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-07 00:59:52.337459 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-07 00:59:52.337478 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-07 00:59:52.337489 | orchestrator | 2026-03-07 00:59:52.337500 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2026-03-07 00:59:52.337510 | orchestrator | Saturday 07 March 2026 00:56:53 +0000 (0:00:02.996) 0:00:08.160 ******** 2026-03-07 00:59:52.337520 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-07 00:59:52.337531 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-07 00:59:52.337541 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:59:52.337552 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-07 00:59:52.337820 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-07 00:59:52.337866 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:59:52.337917 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-07 00:59:52.337971 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-07 00:59:52.337992 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:59:52.338008 | orchestrator | 2026-03-07 00:59:52.338111 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2026-03-07 00:59:52.338130 | orchestrator | Saturday 07 March 2026 00:56:54 +0000 (0:00:01.165) 0:00:09.326 ******** 2026-03-07 00:59:52.338146 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': 
{'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-07 00:59:52.338207 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-07 00:59:52.338224 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:59:52.338241 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 
'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-07 00:59:52.338265 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-07 00:59:52.338282 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:59:52.338297 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 
'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-07 00:59:52.338340 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-07 00:59:52.338358 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:59:52.338375 | orchestrator | 2026-03-07 00:59:52.338392 | orchestrator | TASK [opensearch : Copying over 
config.json files for services] **************** 2026-03-07 00:59:52.338408 | orchestrator | Saturday 07 March 2026 00:56:56 +0000 (0:00:01.153) 0:00:10.479 ******** 2026-03-07 00:59:52.338426 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-07 00:59:52.338477 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-07 00:59:52.338497 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': 
{'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-07 00:59:52.338538 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-07 00:59:52.338559 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 
'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-07 00:59:52.338586 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 
'opensearch', 'auth_pass': 'password'}}}}) 2026-03-07 00:59:52.338601 | orchestrator | 2026-03-07 00:59:52.338613 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2026-03-07 00:59:52.338624 | orchestrator | Saturday 07 March 2026 00:56:58 +0000 (0:00:02.563) 0:00:13.043 ******** 2026-03-07 00:59:52.338643 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:59:52.338654 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:59:52.338668 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:59:52.338685 | orchestrator | 2026-03-07 00:59:52.338701 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2026-03-07 00:59:52.338718 | orchestrator | Saturday 07 March 2026 00:57:02 +0000 (0:00:03.650) 0:00:16.694 ******** 2026-03-07 00:59:52.338748 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:59:52.338766 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:59:52.338784 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:59:52.338800 | orchestrator | 2026-03-07 00:59:52.338816 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2026-03-07 00:59:52.338831 | orchestrator | Saturday 07 March 2026 00:57:04 +0000 (0:00:02.144) 0:00:18.839 ******** 2026-03-07 00:59:52.338842 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-07 00:59:52 | INFO  | Task 69aa7610-06c7-47a5-8987-d81562fa3369 is in state SUCCESS 2026-03-07 00:59:52.338862 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-07 00:59:52.338937 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-07 00:59:52.338955 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-07 00:59:52.338975 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 
'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-07 00:59:52.338994 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-07 00:59:52.339005 | orchestrator | 2026-03-07 00:59:52.339015 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-07 00:59:52.339025 | orchestrator | Saturday 07 March 2026 00:57:06 +0000 (0:00:02.289) 0:00:21.128 ******** 2026-03-07 00:59:52.339034 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:59:52.339044 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:59:52.339053 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:59:52.339063 | orchestrator | 2026-03-07 00:59:52.339073 | orchestrator | TASK [opensearch : Flush handlers] 
********************************************* 2026-03-07 00:59:52.339083 | orchestrator | Saturday 07 March 2026 00:57:07 +0000 (0:00:00.528) 0:00:21.657 ******** 2026-03-07 00:59:52.339092 | orchestrator | 2026-03-07 00:59:52.339102 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-03-07 00:59:52.339112 | orchestrator | Saturday 07 March 2026 00:57:07 +0000 (0:00:00.074) 0:00:21.731 ******** 2026-03-07 00:59:52.339121 | orchestrator | 2026-03-07 00:59:52.339131 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-03-07 00:59:52.339141 | orchestrator | Saturday 07 March 2026 00:57:07 +0000 (0:00:00.076) 0:00:21.808 ******** 2026-03-07 00:59:52.339150 | orchestrator | 2026-03-07 00:59:52.339160 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2026-03-07 00:59:52.339179 | orchestrator | Saturday 07 March 2026 00:57:07 +0000 (0:00:00.091) 0:00:21.900 ******** 2026-03-07 00:59:52.339189 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:59:52.339198 | orchestrator | 2026-03-07 00:59:52.339208 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2026-03-07 00:59:52.339218 | orchestrator | Saturday 07 March 2026 00:57:08 +0000 (0:00:00.765) 0:00:22.665 ******** 2026-03-07 00:59:52.339228 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:59:52.339237 | orchestrator | 2026-03-07 00:59:52.339247 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2026-03-07 00:59:52.339257 | orchestrator | Saturday 07 March 2026 00:57:08 +0000 (0:00:00.233) 0:00:22.899 ******** 2026-03-07 00:59:52.339266 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:59:52.339276 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:59:52.339290 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:59:52.339300 | orchestrator | 2026-03-07 
00:59:52.339310 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2026-03-07 00:59:52.339320 | orchestrator | Saturday 07 March 2026 00:58:11 +0000 (0:01:02.813) 0:01:25.712 ******** 2026-03-07 00:59:52.339330 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:59:52.339339 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:59:52.339349 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:59:52.339359 | orchestrator | 2026-03-07 00:59:52.339368 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-07 00:59:52.339378 | orchestrator | Saturday 07 March 2026 00:59:36 +0000 (0:01:25.001) 0:02:50.714 ******** 2026-03-07 00:59:52.339388 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 00:59:52.339398 | orchestrator | 2026-03-07 00:59:52.339407 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2026-03-07 00:59:52.339417 | orchestrator | Saturday 07 March 2026 00:59:37 +0000 (0:00:00.805) 0:02:51.520 ******** 2026-03-07 00:59:52.339427 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:59:52.339437 | orchestrator | 2026-03-07 00:59:52.339447 | orchestrator | TASK [opensearch : Wait for OpenSearch cluster to become healthy] ************** 2026-03-07 00:59:52.339456 | orchestrator | Saturday 07 March 2026 00:59:39 +0000 (0:00:02.565) 0:02:54.085 ******** 2026-03-07 00:59:52.339466 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:59:52.339475 | orchestrator | 2026-03-07 00:59:52.339485 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2026-03-07 00:59:52.339494 | orchestrator | Saturday 07 March 2026 00:59:41 +0000 (0:00:02.193) 0:02:56.279 ******** 2026-03-07 00:59:52.339504 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:59:52.339514 | orchestrator | 2026-03-07 
00:59:52.339523 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2026-03-07 00:59:52.339533 | orchestrator | Saturday 07 March 2026 00:59:44 +0000 (0:00:02.230) 0:02:58.509 ******** 2026-03-07 00:59:52.339543 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:59:52.339552 | orchestrator | 2026-03-07 00:59:52.339562 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2026-03-07 00:59:52.339572 | orchestrator | Saturday 07 March 2026 00:59:46 +0000 (0:00:02.802) 0:03:01.312 ******** 2026-03-07 00:59:52.339581 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:59:52.339591 | orchestrator | 2026-03-07 00:59:52.339600 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-07 00:59:52.339611 | orchestrator | testbed-node-0 : ok=19  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-07 00:59:52.339629 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-07 00:59:52.339639 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-07 00:59:52.339656 | orchestrator | 2026-03-07 00:59:52.339666 | orchestrator | 2026-03-07 00:59:52.339676 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-07 00:59:52.339685 | orchestrator | Saturday 07 March 2026 00:59:49 +0000 (0:00:02.479) 0:03:03.792 ******** 2026-03-07 00:59:52.339695 | orchestrator | =============================================================================== 2026-03-07 00:59:52.339704 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 85.00s 2026-03-07 00:59:52.339714 | orchestrator | opensearch : Restart opensearch container ------------------------------ 62.81s 2026-03-07 00:59:52.339724 | orchestrator | opensearch : Copying over 
opensearch service config file ---------------- 3.65s 2026-03-07 00:59:52.339733 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 3.00s 2026-03-07 00:59:52.339743 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.80s 2026-03-07 00:59:52.339752 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.57s 2026-03-07 00:59:52.339762 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.56s 2026-03-07 00:59:52.339772 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.48s 2026-03-07 00:59:52.339781 | orchestrator | opensearch : Check opensearch containers -------------------------------- 2.29s 2026-03-07 00:59:52.339791 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.23s 2026-03-07 00:59:52.339800 | orchestrator | opensearch : Wait for OpenSearch cluster to become healthy -------------- 2.19s 2026-03-07 00:59:52.339810 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 2.14s 2026-03-07 00:59:52.339819 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 2.08s 2026-03-07 00:59:52.339829 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.17s 2026-03-07 00:59:52.339839 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.15s 2026-03-07 00:59:52.339848 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.80s 2026-03-07 00:59:52.339858 | orchestrator | opensearch : Disable shard allocation ----------------------------------- 0.77s 2026-03-07 00:59:52.339868 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.69s 2026-03-07 00:59:52.339893 | orchestrator | opensearch : include_tasks 
---------------------------------------------- 0.66s 2026-03-07 00:59:52.339903 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.61s 2026-03-07 00:59:52.339922 | orchestrator | 2026-03-07 00:59:52 | INFO  | Task 476aa002-fef5-421c-a958-975f3289671b is in state STARTED 2026-03-07 00:59:52.339933 | orchestrator | 2026-03-07 00:59:52 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:59:55.381655 | orchestrator | 2026-03-07 00:59:55 | INFO  | Task 8f335aab-7019-4a86-a41c-19d160186bd9 is in state STARTED 2026-03-07 00:59:55.384229 | orchestrator | 2026-03-07 00:59:55 | INFO  | Task 476aa002-fef5-421c-a958-975f3289671b is in state STARTED 2026-03-07 00:59:55.384911 | orchestrator | 2026-03-07 00:59:55 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:59:58.430855 | orchestrator | 2026-03-07 00:59:58 | INFO  | Task 8f335aab-7019-4a86-a41c-19d160186bd9 is in state STARTED 2026-03-07 00:59:58.431370 | orchestrator | 2026-03-07 00:59:58 | INFO  | Task 476aa002-fef5-421c-a958-975f3289671b is in state STARTED 2026-03-07 00:59:58.431433 | orchestrator | 2026-03-07 00:59:58 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:00:01.478283 | orchestrator | 2026-03-07 01:00:01 | INFO  | Task 8f335aab-7019-4a86-a41c-19d160186bd9 is in state STARTED 2026-03-07 01:00:01.481070 | orchestrator | 2026-03-07 01:00:01 | INFO  | Task 476aa002-fef5-421c-a958-975f3289671b is in state STARTED 2026-03-07 01:00:01.481155 | orchestrator | 2026-03-07 01:00:01 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:00:04.538690 | orchestrator | 2026-03-07 01:00:04 | INFO  | Task caa551a3-2c7b-45b5-8b35-8055f51c9eea is in state STARTED 2026-03-07 01:00:04.538768 | orchestrator | 2026-03-07 01:00:04 | INFO  | Task 8f335aab-7019-4a86-a41c-19d160186bd9 is in state STARTED 2026-03-07 01:00:04.540433 | orchestrator | 2026-03-07 01:00:04 | INFO  | Task 73f16833-6364-4c09-a21b-6133ec21f593 is in state STARTED 2026-03-07 
01:00:04.543830 | orchestrator | 2026-03-07 01:00:04 | INFO  | Task 476aa002-fef5-421c-a958-975f3289671b is in state SUCCESS 2026-03-07 01:00:04.545109 | orchestrator | 2026-03-07 01:00:04.545145 | orchestrator | 2026-03-07 01:00:04.545151 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2026-03-07 01:00:04.545157 | orchestrator | 2026-03-07 01:00:04.545162 | orchestrator | TASK [Inform the user about the following task] ******************************** 2026-03-07 01:00:04.545168 | orchestrator | Saturday 07 March 2026 00:56:45 +0000 (0:00:00.124) 0:00:00.124 ******** 2026-03-07 01:00:04.545173 | orchestrator | ok: [localhost] => { 2026-03-07 01:00:04.545180 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 2026-03-07 01:00:04.545185 | orchestrator | } 2026-03-07 01:00:04.545190 | orchestrator | 2026-03-07 01:00:04.545196 | orchestrator | TASK [Check MariaDB service] *************************************************** 2026-03-07 01:00:04.545201 | orchestrator | Saturday 07 March 2026 00:56:45 +0000 (0:00:00.046) 0:00:00.171 ******** 2026-03-07 01:00:04.545250 | orchestrator | fatal: [localhost]: FAILED! 
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2026-03-07 01:00:04.545257 | orchestrator | ...ignoring 2026-03-07 01:00:04.545264 | orchestrator | 2026-03-07 01:00:04.545272 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2026-03-07 01:00:04.545411 | orchestrator | Saturday 07 March 2026 00:56:48 +0000 (0:00:03.110) 0:00:03.281 ******** 2026-03-07 01:00:04.545420 | orchestrator | skipping: [localhost] 2026-03-07 01:00:04.545425 | orchestrator | 2026-03-07 01:00:04.545433 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2026-03-07 01:00:04.545439 | orchestrator | Saturday 07 March 2026 00:56:48 +0000 (0:00:00.089) 0:00:03.371 ******** 2026-03-07 01:00:04.545444 | orchestrator | ok: [localhost] 2026-03-07 01:00:04.545449 | orchestrator | 2026-03-07 01:00:04.545454 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-07 01:00:04.545458 | orchestrator | 2026-03-07 01:00:04.545463 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-07 01:00:04.545468 | orchestrator | Saturday 07 March 2026 00:56:49 +0000 (0:00:00.259) 0:00:03.631 ******** 2026-03-07 01:00:04.545473 | orchestrator | ok: [testbed-node-0] 2026-03-07 01:00:04.545477 | orchestrator | ok: [testbed-node-1] 2026-03-07 01:00:04.545482 | orchestrator | ok: [testbed-node-2] 2026-03-07 01:00:04.545486 | orchestrator | 2026-03-07 01:00:04.545491 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-07 01:00:04.545496 | orchestrator | Saturday 07 March 2026 00:56:49 +0000 (0:00:00.416) 0:00:04.048 ******** 2026-03-07 01:00:04.545500 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-03-07 01:00:04.545505 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 
2026-03-07 01:00:04.545510 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-03-07 01:00:04.545514 | orchestrator | 2026-03-07 01:00:04.545519 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-03-07 01:00:04.545523 | orchestrator | 2026-03-07 01:00:04.545528 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-03-07 01:00:04.545533 | orchestrator | Saturday 07 March 2026 00:56:50 +0000 (0:00:00.687) 0:00:04.736 ******** 2026-03-07 01:00:04.545537 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-07 01:00:04.545544 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-03-07 01:00:04.545551 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-03-07 01:00:04.545581 | orchestrator | 2026-03-07 01:00:04.545588 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-07 01:00:04.545596 | orchestrator | Saturday 07 March 2026 00:56:50 +0000 (0:00:00.520) 0:00:05.256 ******** 2026-03-07 01:00:04.545617 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 01:00:04.545625 | orchestrator | 2026-03-07 01:00:04.545632 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2026-03-07 01:00:04.545640 | orchestrator | Saturday 07 March 2026 00:56:51 +0000 (0:00:00.759) 0:00:06.016 ******** 2026-03-07 01:00:04.545665 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-07 01:00:04.545676 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-07 01:00:04.545696 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 
'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-07 01:00:04.545705 | orchestrator | 2026-03-07 01:00:04.545719 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2026-03-07 01:00:04.545725 | orchestrator | Saturday 07 March 2026 00:56:54 +0000 (0:00:03.555) 0:00:09.571 ******** 2026-03-07 01:00:04.545732 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:00:04.545741 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:00:04.545748 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:00:04.545755 | orchestrator | 2026-03-07 01:00:04.545761 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2026-03-07 01:00:04.545769 | orchestrator | Saturday 07 March 2026 00:56:55 +0000 (0:00:00.939) 0:00:10.511 ******** 2026-03-07 01:00:04.545776 | orchestrator | skipping: [testbed-node-1] 2026-03-07 
01:00:04.545783 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:00:04.545791 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:00:04.545799 | orchestrator | 2026-03-07 01:00:04.545806 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2026-03-07 01:00:04.545813 | orchestrator | Saturday 07 March 2026 00:56:57 +0000 (0:00:01.746) 0:00:12.257 ******** 2026-03-07 01:00:04.545823 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server 
testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-07 01:00:04.545842 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 
rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-07 01:00:04.545848 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-07 
01:00:04.545858 | orchestrator | 2026-03-07 01:00:04.545863 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2026-03-07 01:00:04.545868 | orchestrator | Saturday 07 March 2026 00:57:02 +0000 (0:00:05.038) 0:00:17.295 ******** 2026-03-07 01:00:04.545872 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:00:04.545877 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:00:04.545881 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:00:04.545913 | orchestrator | 2026-03-07 01:00:04.545918 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2026-03-07 01:00:04.545923 | orchestrator | Saturday 07 March 2026 00:57:03 +0000 (0:00:01.172) 0:00:18.468 ******** 2026-03-07 01:00:04.545927 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:00:04.545935 | orchestrator | changed: [testbed-node-1] 2026-03-07 01:00:04.545939 | orchestrator | changed: [testbed-node-2] 2026-03-07 01:00:04.545944 | orchestrator | 2026-03-07 01:00:04.545948 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-07 01:00:04.545953 | orchestrator | Saturday 07 March 2026 00:57:08 +0000 (0:00:04.517) 0:00:22.986 ******** 2026-03-07 01:00:04.545958 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 01:00:04.545963 | orchestrator | 2026-03-07 01:00:04.545967 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-03-07 01:00:04.545974 | orchestrator | Saturday 07 March 2026 00:57:08 +0000 (0:00:00.572) 0:00:23.558 ******** 2026-03-07 01:00:04.545990 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-07 01:00:04.546002 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:00:04.546060 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-07 01:00:04.546072 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:00:04.546087 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-07 01:00:04.546097 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:00:04.546104 | orchestrator | 2026-03-07 01:00:04.546112 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-03-07 01:00:04.546120 | orchestrator | Saturday 07 March 2026 00:57:12 +0000 (0:00:03.557) 0:00:27.116 ******** 2026-03-07 01:00:04.546128 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 
'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-07 01:00:04.546142 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:00:04.546160 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-07 01:00:04.546170 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:00:04.546178 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-07 01:00:04.546195 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:00:04.546201 | orchestrator | 2026-03-07 01:00:04.546206 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-03-07 01:00:04.546212 | orchestrator | Saturday 07 March 2026 00:57:16 +0000 (0:00:03.515) 0:00:30.631 ******** 2026-03-07 01:00:04.546221 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-07 01:00:04.546227 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:00:04.546238 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': 
{'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-07 01:00:04.546249 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:00:04.546258 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 
'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-07 01:00:04.546264 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:00:04.546270 | orchestrator | 2026-03-07 01:00:04.546275 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2026-03-07 01:00:04.546281 | orchestrator | Saturday 07 March 2026 00:57:20 +0000 
(0:00:04.186) 0:00:34.818 ******** 2026-03-07 01:00:04.546292 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-07 01:00:04.546306 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': 
{'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-07 01:00:04.546318 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-07 01:00:04.546331 | orchestrator | 2026-03-07 01:00:04.546337 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2026-03-07 01:00:04.546343 | orchestrator | Saturday 07 March 2026 00:57:23 +0000 (0:00:03.421) 0:00:38.240 ******** 2026-03-07 01:00:04.546348 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:00:04.546353 | orchestrator | 
changed: [testbed-node-1]
2026-03-07 01:00:04.546359 | orchestrator | changed: [testbed-node-2]
2026-03-07 01:00:04.546364 | orchestrator |
2026-03-07 01:00:04.546369 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] *************
2026-03-07 01:00:04.546375 | orchestrator | Saturday 07 March 2026 00:57:24 +0000 (0:00:00.904) 0:00:39.144 ********
2026-03-07 01:00:04.546381 | orchestrator | ok: [testbed-node-0]
2026-03-07 01:00:04.546387 | orchestrator | ok: [testbed-node-1]
2026-03-07 01:00:04.546392 | orchestrator | ok: [testbed-node-2]
2026-03-07 01:00:04.546398 | orchestrator |
2026-03-07 01:00:04.546403 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] *************
2026-03-07 01:00:04.546409 | orchestrator | Saturday 07 March 2026 00:57:25 +0000 (0:00:00.513) 0:00:39.657 ********
2026-03-07 01:00:04.546414 | orchestrator | ok: [testbed-node-0]
2026-03-07 01:00:04.546420 | orchestrator | ok: [testbed-node-1]
2026-03-07 01:00:04.546426 | orchestrator | ok: [testbed-node-2]
2026-03-07 01:00:04.546432 | orchestrator |
2026-03-07 01:00:04.546437 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] ***************************
2026-03-07 01:00:04.546442 | orchestrator | Saturday 07 March 2026 00:57:25 +0000 (0:00:00.361) 0:00:40.019 ********
2026-03-07 01:00:04.546448 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"}
2026-03-07 01:00:04.546456 | orchestrator | ...ignoring
2026-03-07 01:00:04.546464 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"}
2026-03-07 01:00:04.546471 | orchestrator | ...ignoring
2026-03-07 01:00:04.546483 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"}
2026-03-07 01:00:04.546491 | orchestrator | ...ignoring
2026-03-07 01:00:04.546499 | orchestrator |
2026-03-07 01:00:04.546507 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] ***********
2026-03-07 01:00:04.546514 | orchestrator | Saturday 07 March 2026 00:57:36 +0000 (0:00:10.892) 0:00:50.911 ********
2026-03-07 01:00:04.546521 | orchestrator | ok: [testbed-node-0]
2026-03-07 01:00:04.546526 | orchestrator | ok: [testbed-node-1]
2026-03-07 01:00:04.546531 | orchestrator | ok: [testbed-node-2]
2026-03-07 01:00:04.546535 | orchestrator |
2026-03-07 01:00:04.546540 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] **************************
2026-03-07 01:00:04.546544 | orchestrator | Saturday 07 March 2026 00:57:36 +0000 (0:00:00.423) 0:00:51.334 ********
2026-03-07 01:00:04.546549 | orchestrator | skipping: [testbed-node-0]
2026-03-07 01:00:04.546558 | orchestrator | skipping: [testbed-node-1]
2026-03-07 01:00:04.546563 | orchestrator | skipping: [testbed-node-2]
2026-03-07 01:00:04.546568 | orchestrator |
2026-03-07 01:00:04.546572 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] ***********************
2026-03-07 01:00:04.546577 | orchestrator | Saturday 07 March 2026 00:57:37 +0000 (0:00:00.441) 0:00:52.029 ********
2026-03-07 01:00:04.546582 | orchestrator | skipping: [testbed-node-0]
2026-03-07 01:00:04.546586 | orchestrator | skipping: [testbed-node-1]
2026-03-07 01:00:04.546591 | orchestrator | skipping: [testbed-node-2]
2026-03-07 01:00:04.546595 | orchestrator |
2026-03-07 01:00:04.546600 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] *********************
2026-03-07 01:00:04.546605 | orchestrator | Saturday 07 March 2026 00:57:37 +0000 (0:00:00.432) 0:00:52.470 ********
2026-03-07 01:00:04.546609 | orchestrator | skipping: [testbed-node-0]
2026-03-07 01:00:04.546614 | orchestrator | skipping: [testbed-node-1]
2026-03-07 01:00:04.546622 | orchestrator | skipping: [testbed-node-2]
2026-03-07 01:00:04.546629 | orchestrator |
2026-03-07 01:00:04.546636 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] *******
2026-03-07 01:00:04.546643 | orchestrator | Saturday 07 March 2026 00:57:38 +0000 (0:00:00.432) 0:00:52.902 ********
2026-03-07 01:00:04.546650 | orchestrator | ok: [testbed-node-0]
2026-03-07 01:00:04.546658 | orchestrator | ok: [testbed-node-1]
2026-03-07 01:00:04.546666 | orchestrator | ok: [testbed-node-2]
2026-03-07 01:00:04.546674 | orchestrator |
2026-03-07 01:00:04.546681 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] ***
2026-03-07 01:00:04.546689 | orchestrator | Saturday 07 March 2026 00:57:38 +0000 (0:00:00.412) 0:00:53.315 ********
2026-03-07 01:00:04.546699 | orchestrator | skipping: [testbed-node-0]
2026-03-07 01:00:04.546704 | orchestrator | skipping: [testbed-node-1]
2026-03-07 01:00:04.546709 | orchestrator | skipping: [testbed-node-2]
2026-03-07 01:00:04.546713 | orchestrator |
2026-03-07 01:00:04.546718 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-03-07 01:00:04.546722 | orchestrator | Saturday 07 March 2026 00:57:39 +0000 (0:00:00.585) 0:00:53.900 ********
2026-03-07 01:00:04.546727 | orchestrator | skipping: [testbed-node-1]
2026-03-07 01:00:04.546731 | orchestrator | skipping: [testbed-node-2]
2026-03-07 01:00:04.546736 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0
2026-03-07 01:00:04.546740 | orchestrator |
2026-03-07 01:00:04.546745 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] ***************************
2026-03-07 01:00:04.546750 | orchestrator | Saturday 07 March 2026 00:57:39 +0000 (0:00:00.363) 0:00:54.263 ********
2026-03-07 01:00:04.546754 | orchestrator | changed: [testbed-node-0]
2026-03-07 01:00:04.546759 | orchestrator |
2026-03-07 01:00:04.546763 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] **************************
2026-03-07 01:00:04.546768 | orchestrator | Saturday 07 March 2026 00:57:50 +0000 (0:00:10.430) 0:01:04.694 ********
2026-03-07 01:00:04.546772 | orchestrator | ok: [testbed-node-0]
2026-03-07 01:00:04.546777 | orchestrator |
2026-03-07 01:00:04.546781 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-03-07 01:00:04.546786 | orchestrator | Saturday 07 March 2026 00:57:50 +0000 (0:00:00.144) 0:01:04.838 ********
2026-03-07 01:00:04.546791 | orchestrator | skipping: [testbed-node-0]
2026-03-07 01:00:04.546795 | orchestrator | skipping: [testbed-node-1]
2026-03-07 01:00:04.546800 | orchestrator | skipping: [testbed-node-2]
2026-03-07 01:00:04.546805 | orchestrator |
2026-03-07 01:00:04.546810 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] *******************
2026-03-07 01:00:04.546814 | orchestrator | Saturday 07 March 2026 00:57:51 +0000 (0:00:01.185) 0:01:06.024 ********
2026-03-07 01:00:04.546819 | orchestrator | changed: [testbed-node-0]
2026-03-07 01:00:04.546823 | orchestrator |
2026-03-07 01:00:04.546828 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] *******
2026-03-07 01:00:04.546832 | orchestrator | Saturday 07 March 2026 00:58:00 +0000 (0:00:09.117) 0:01:15.141 ********
2026-03-07 01:00:04.546843 | orchestrator | ok: [testbed-node-0]
2026-03-07 01:00:04.546848 | orchestrator |
2026-03-07 01:00:04.546852 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] *******
2026-03-07 01:00:04.546857 | orchestrator | Saturday 07 March 2026 00:58:03 +0000 (0:00:02.600) 0:01:17.742 ********
2026-03-07 01:00:04.546861 | orchestrator | ok: [testbed-node-0]
2026-03-07 01:00:04.546866 | orchestrator |
2026-03-07 01:00:04.546870 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] ***
2026-03-07 01:00:04.546875 | orchestrator | Saturday 07 March 2026 00:58:06 +0000 (0:00:02.879) 0:01:20.621 ********
2026-03-07 01:00:04.546879 | orchestrator | changed: [testbed-node-0]
2026-03-07 01:00:04.546933 | orchestrator |
2026-03-07 01:00:04.546940 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ********
2026-03-07 01:00:04.546944 | orchestrator | Saturday 07 March 2026 00:58:06 +0000 (0:00:00.150) 0:01:20.771 ********
2026-03-07 01:00:04.546949 | orchestrator | skipping: [testbed-node-0]
2026-03-07 01:00:04.546954 | orchestrator | skipping: [testbed-node-1]
2026-03-07 01:00:04.546958 | orchestrator | skipping: [testbed-node-2]
2026-03-07 01:00:04.546962 | orchestrator |
2026-03-07 01:00:04.546967 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] *************************
2026-03-07 01:00:04.546972 | orchestrator | Saturday 07 March 2026 00:58:06 +0000 (0:00:00.375) 0:01:21.147 ********
2026-03-07 01:00:04.546976 | orchestrator | skipping: [testbed-node-0]
2026-03-07 01:00:04.546981 | orchestrator | changed: [testbed-node-1]
2026-03-07 01:00:04.546985 | orchestrator | changed: [testbed-node-2]
2026-03-07 01:00:04.546994 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart
2026-03-07 01:00:04.546999 | orchestrator |
2026-03-07 01:00:04.547003 | orchestrator | PLAY [Restart mariadb services] ************************************************
2026-03-07 01:00:04.547008 | orchestrator | skipping: no hosts matched
2026-03-07 01:00:04.547012 | orchestrator |
2026-03-07 01:00:04.547017 | orchestrator | PLAY [Start mariadb services] **************************************************
2026-03-07 01:00:04.547022 | orchestrator |
2026-03-07 01:00:04.547027 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-03-07 01:00:04.547031 | orchestrator | Saturday 07 March 2026 00:58:07 +0000 (0:00:00.685) 0:01:21.832 ********
2026-03-07 01:00:04.547036 | orchestrator | changed: [testbed-node-1]
2026-03-07 01:00:04.547041 | orchestrator |
2026-03-07 01:00:04.547045 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-03-07 01:00:04.547050 | orchestrator | Saturday 07 March 2026 00:58:26 +0000 (0:00:19.691) 0:01:41.523 ********
2026-03-07 01:00:04.547055 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Wait for MariaDB service port liveness (10 retries left).
2026-03-07 01:00:04.547060 | orchestrator | ok: [testbed-node-1]
2026-03-07 01:00:04.547064 | orchestrator |
2026-03-07 01:00:04.547069 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-03-07 01:00:04.547073 | orchestrator | Saturday 07 March 2026 00:58:43 +0000 (0:00:16.248) 0:01:57.772 ********
2026-03-07 01:00:04.547078 | orchestrator | ok: [testbed-node-1]
2026-03-07 01:00:04.547082 | orchestrator |
2026-03-07 01:00:04.547087 | orchestrator | PLAY [Start mariadb services] **************************************************
2026-03-07 01:00:04.547091 | orchestrator |
2026-03-07 01:00:04.547096 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-03-07 01:00:04.547100 | orchestrator | Saturday 07 March 2026 00:58:46 +0000 (0:00:02.897) 0:02:00.669 ********
2026-03-07 01:00:04.547105 | orchestrator | changed: [testbed-node-2]
2026-03-07 01:00:04.547109 | orchestrator |
2026-03-07 01:00:04.547114 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-03-07 01:00:04.547118 | orchestrator | Saturday 07 March 2026 00:59:06 +0000 (0:00:19.991) 0:02:20.661 ********
2026-03-07 01:00:04.547123 | orchestrator | ok: [testbed-node-2]
2026-03-07 01:00:04.547127 | orchestrator |
2026-03-07 01:00:04.547132 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-03-07 01:00:04.547137 | orchestrator | Saturday 07 March 2026 00:59:22 +0000 (0:00:16.592) 0:02:37.253 ********
2026-03-07 01:00:04.547151 | orchestrator | ok: [testbed-node-2]
2026-03-07 01:00:04.547160 | orchestrator |
2026-03-07 01:00:04.547167 | orchestrator | PLAY [Restart bootstrap mariadb service] ***************************************
2026-03-07 01:00:04.547175 | orchestrator |
2026-03-07 01:00:04.547183 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-03-07 01:00:04.547191 | orchestrator | Saturday 07 March 2026 00:59:25 +0000 (0:00:02.691) 0:02:39.945 ********
2026-03-07 01:00:04.547199 | orchestrator | changed: [testbed-node-0]
2026-03-07 01:00:04.547207 | orchestrator |
2026-03-07 01:00:04.547215 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-03-07 01:00:04.547222 | orchestrator | Saturday 07 March 2026 00:59:43 +0000 (0:00:18.543) 0:02:58.488 ********
2026-03-07 01:00:04.547229 | orchestrator | ok: [testbed-node-0]
2026-03-07 01:00:04.547236 | orchestrator |
2026-03-07 01:00:04.547244 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-03-07 01:00:04.547252 | orchestrator | Saturday 07 March 2026 00:59:44 +0000 (0:00:00.621) 0:02:59.110 ********
2026-03-07 01:00:04.547259 | orchestrator | ok: [testbed-node-0]
2026-03-07 01:00:04.547267 | orchestrator |
2026-03-07 01:00:04.547276 | orchestrator | PLAY [Apply mariadb post-configuration] ****************************************
2026-03-07 01:00:04.547284 | orchestrator |
2026-03-07 01:00:04.547292 | orchestrator | TASK [Include mariadb post-deploy.yml] *****************************************
2026-03-07 01:00:04.547300 | orchestrator | Saturday 07 March 2026 00:59:47 +0000 (0:00:03.027) 0:03:02.137 ********
2026-03-07 01:00:04.547308 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-07 01:00:04.547313 | orchestrator |
2026-03-07 01:00:04.547317 | orchestrator | TASK [mariadb : Creating shard root mysql user] ********************************
2026-03-07 01:00:04.547322 | orchestrator | Saturday 07 March 2026 00:59:48 +0000 (0:00:00.579) 0:03:02.717 ********
2026-03-07 01:00:04.547326 | orchestrator | skipping: [testbed-node-1]
2026-03-07 01:00:04.547331 | orchestrator | skipping: [testbed-node-2]
2026-03-07 01:00:04.547335 | orchestrator | changed: [testbed-node-0]
2026-03-07 01:00:04.547340 | orchestrator |
2026-03-07 01:00:04.547344 | orchestrator | TASK [mariadb : Creating mysql monitor user] ***********************************
2026-03-07 01:00:04.547349 | orchestrator | Saturday 07 March 2026 00:59:50 +0000 (0:00:02.462) 0:03:05.180 ********
2026-03-07 01:00:04.547353 | orchestrator | skipping: [testbed-node-1]
2026-03-07 01:00:04.547358 | orchestrator | skipping: [testbed-node-2]
2026-03-07 01:00:04.547362 | orchestrator | changed: [testbed-node-0]
2026-03-07 01:00:04.547367 | orchestrator |
2026-03-07 01:00:04.547371 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] *********
2026-03-07 01:00:04.547376 | orchestrator | Saturday 07 March 2026 00:59:53 +0000 (0:00:02.465) 0:03:07.645 ********
2026-03-07 01:00:04.547380 | orchestrator | skipping: [testbed-node-1]
2026-03-07 01:00:04.547385 | orchestrator | skipping: [testbed-node-2]
2026-03-07 01:00:04.547390 | orchestrator | changed: [testbed-node-0]
2026-03-07 01:00:04.547398 | orchestrator |
2026-03-07 01:00:04.547405 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] ***
2026-03-07 01:00:04.547412 | orchestrator | Saturday 07 March 2026 00:59:55 +0000 (0:00:02.253) 0:03:09.899 ********
2026-03-07 01:00:04.547419 | orchestrator | skipping: [testbed-node-1]
2026-03-07 01:00:04.547427 | orchestrator | skipping: [testbed-node-2]
2026-03-07 01:00:04.547435 | orchestrator | changed: [testbed-node-0]
2026-03-07 01:00:04.547441 | orchestrator |
2026-03-07 01:00:04.547449 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] **************
2026-03-07 01:00:04.547457 | orchestrator | Saturday 07 March 2026 00:59:57 +0000 (0:00:02.179) 0:03:12.078 ********
2026-03-07 01:00:04.547465 | orchestrator | ok: [testbed-node-0]
2026-03-07 01:00:04.547473 | orchestrator | ok: [testbed-node-1]
2026-03-07 01:00:04.547480 | orchestrator | ok: [testbed-node-2]
2026-03-07 01:00:04.547489 | orchestrator |
2026-03-07 01:00:04.547496 | orchestrator | TASK [Include mariadb post-upgrade.yml] ****************************************
2026-03-07 01:00:04.547508 | orchestrator | Saturday 07 March 2026 01:00:01 +0000 (0:00:03.602) 0:03:15.680 ********
2026-03-07 01:00:04.547524 | orchestrator | skipping: [testbed-node-0]
2026-03-07 01:00:04.547532 | orchestrator | skipping: [testbed-node-1]
2026-03-07 01:00:04.547539 | orchestrator | skipping: [testbed-node-2]
2026-03-07 01:00:04.547547 | orchestrator |
2026-03-07 01:00:04.547552 | orchestrator | PLAY RECAP *********************************************************************
2026-03-07 01:00:04.547557 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1
2026-03-07 01:00:04.547563 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1
2026-03-07 01:00:04.547569 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1
2026-03-07 01:00:04.547574 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1
2026-03-07 01:00:04.547578 | orchestrator |
2026-03-07 01:00:04.547583 | orchestrator |
2026-03-07 01:00:04.547588 | orchestrator | TASKS RECAP ********************************************************************
2026-03-07 01:00:04.547592 | orchestrator | Saturday 07 March 2026 01:00:01 +0000 (0:00:00.243) 0:03:15.923 ********
2026-03-07 01:00:04.547597 | orchestrator | ===============================================================================
2026-03-07 01:00:04.547601 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 39.68s
2026-03-07 01:00:04.547606 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 32.84s
2026-03-07 01:00:04.547610 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 18.54s
2026-03-07 01:00:04.547615 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.89s
2026-03-07 01:00:04.547620 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.43s
2026-03-07 01:00:04.547628 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 9.12s
2026-03-07 01:00:04.547633 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 5.59s
2026-03-07 01:00:04.547637 | orchestrator | mariadb : Copying over config.json files for services ------------------- 5.04s
2026-03-07 01:00:04.547642 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 4.52s
2026-03-07 01:00:04.547647 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 4.19s
2026-03-07 01:00:04.547651 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 3.60s
2026-03-07 01:00:04.547656 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 3.56s
2026-03-07 01:00:04.547660 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.56s
2026-03-07 01:00:04.547665 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 3.51s
2026-03-07
01:00:04.547670 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 3.42s
2026-03-07 01:00:04.547675 | orchestrator | Check MariaDB service --------------------------------------------------- 3.11s
2026-03-07 01:00:04.547679 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 3.03s
2026-03-07 01:00:04.547684 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.88s
2026-03-07 01:00:04.547689 | orchestrator | mariadb : Wait for first MariaDB service port liveness ------------------ 2.60s
2026-03-07 01:00:04.547693 | orchestrator | mariadb : Creating mysql monitor user ----------------------------------- 2.47s
2026-03-07 01:00:04.547698 | orchestrator | 2026-03-07 01:00:04 | INFO  | Wait 1 second(s) until the next check
2026-03-07 01:00:07.597176 | orchestrator | 2026-03-07 01:00:07 | INFO  | Task caa551a3-2c7b-45b5-8b35-8055f51c9eea is in state STARTED
2026-03-07 01:00:07.597692 | orchestrator | 2026-03-07 01:00:07 | INFO  | Task 8f335aab-7019-4a86-a41c-19d160186bd9 is in state STARTED
2026-03-07 01:00:07.599386 | orchestrator | 2026-03-07 01:00:07 | INFO  | Task 73f16833-6364-4c09-a21b-6133ec21f593 is in state STARTED
2026-03-07 01:00:07.599425 | orchestrator | 2026-03-07 01:00:07 | INFO  | Wait 1 second(s) until the next check
2026-03-07 01:00:10.642268 | orchestrator | 2026-03-07 01:00:10 | INFO  | Task caa551a3-2c7b-45b5-8b35-8055f51c9eea is in state STARTED
2026-03-07 01:00:10.643755 | orchestrator | 2026-03-07 01:00:10 | INFO  | Task 8f335aab-7019-4a86-a41c-19d160186bd9 is in state STARTED
2026-03-07 01:00:10.646945 | orchestrator | 2026-03-07 01:00:10 | INFO  | Task 73f16833-6364-4c09-a21b-6133ec21f593 is in state STARTED
2026-03-07 01:00:10.647091 | orchestrator | 2026-03-07 01:00:10 | INFO  | Wait 1 second(s) until the next check
2026-03-07 01:00:13.693772 | orchestrator | 2026-03-07 01:00:13 | INFO  | Task caa551a3-2c7b-45b5-8b35-8055f51c9eea is in state STARTED
2026-03-07 01:00:13.696452 | orchestrator | 2026-03-07 01:00:13 | INFO  | Task 8f335aab-7019-4a86-a41c-19d160186bd9 is in state STARTED
2026-03-07 01:00:13.698993 | orchestrator | 2026-03-07 01:00:13 | INFO  | Task 73f16833-6364-4c09-a21b-6133ec21f593 is in state STARTED
2026-03-07 01:00:13.699049 | orchestrator | 2026-03-07 01:00:13 | INFO  | Wait 1 second(s) until the next check
2026-03-07 01:00:16.752467 | orchestrator | 2026-03-07 01:00:16 | INFO  | Task caa551a3-2c7b-45b5-8b35-8055f51c9eea is in state STARTED
2026-03-07 01:00:16.753229 | orchestrator | 2026-03-07 01:00:16 | INFO  | Task 8f335aab-7019-4a86-a41c-19d160186bd9 is in state STARTED
2026-03-07 01:00:16.756317 | orchestrator | 2026-03-07 01:00:16 | INFO  | Task 73f16833-6364-4c09-a21b-6133ec21f593 is in state STARTED
2026-03-07 01:00:16.756426 | orchestrator | 2026-03-07 01:00:16 | INFO  | Wait 1 second(s) until the next check
2026-03-07 01:00:19.793020 | orchestrator | 2026-03-07 01:00:19 | INFO  | Task caa551a3-2c7b-45b5-8b35-8055f51c9eea is in state STARTED
2026-03-07 01:00:19.794690 | orchestrator | 2026-03-07 01:00:19 | INFO  | Task 8f335aab-7019-4a86-a41c-19d160186bd9 is in state STARTED
2026-03-07 01:00:19.794757 | orchestrator | 2026-03-07 01:00:19 | INFO  | Task 73f16833-6364-4c09-a21b-6133ec21f593 is in state STARTED
2026-03-07 01:00:19.794772 | orchestrator | 2026-03-07 01:00:19 | INFO  | Wait 1 second(s) until the next check
2026-03-07 01:00:22.831750 | orchestrator | 2026-03-07 01:00:22 | INFO  | Task caa551a3-2c7b-45b5-8b35-8055f51c9eea is in state STARTED
2026-03-07 01:00:22.831843 | orchestrator | 2026-03-07 01:00:22 | INFO  | Task 8f335aab-7019-4a86-a41c-19d160186bd9 is in state STARTED
2026-03-07 01:00:22.833543 | orchestrator | 2026-03-07 01:00:22 | INFO  | Task 73f16833-6364-4c09-a21b-6133ec21f593 is in state STARTED
2026-03-07 01:00:22.833560 | orchestrator | 2026-03-07 01:00:22 | INFO  | Wait 1 second(s) until the next check
2026-03-07 01:00:25.875642 | orchestrator | 2026-03-07 01:00:25 | INFO  | Task caa551a3-2c7b-45b5-8b35-8055f51c9eea is in state STARTED
2026-03-07 01:00:25.876507 | orchestrator | 2026-03-07 01:00:25 | INFO  | Task 8f335aab-7019-4a86-a41c-19d160186bd9 is in state STARTED
2026-03-07 01:00:25.877679 | orchestrator | 2026-03-07 01:00:25 | INFO  | Task 73f16833-6364-4c09-a21b-6133ec21f593 is in state STARTED
2026-03-07 01:00:25.877720 | orchestrator | 2026-03-07 01:00:25 | INFO  | Wait 1 second(s) until the next check
2026-03-07 01:00:28.915477 | orchestrator | 2026-03-07 01:00:28 | INFO  | Task caa551a3-2c7b-45b5-8b35-8055f51c9eea is in state STARTED
2026-03-07 01:00:28.917064 | orchestrator | 2026-03-07 01:00:28 | INFO  | Task 8f335aab-7019-4a86-a41c-19d160186bd9 is in state STARTED
2026-03-07 01:00:28.918830 | orchestrator | 2026-03-07 01:00:28 | INFO  | Task 73f16833-6364-4c09-a21b-6133ec21f593 is in state STARTED
2026-03-07 01:00:28.918868 | orchestrator | 2026-03-07 01:00:28 | INFO  | Wait 1 second(s) until the next check
2026-03-07 01:00:31.951692 | orchestrator | 2026-03-07 01:00:31 | INFO  | Task caa551a3-2c7b-45b5-8b35-8055f51c9eea is in state STARTED
2026-03-07 01:00:31.953092 | orchestrator | 2026-03-07 01:00:31 | INFO  | Task 8f335aab-7019-4a86-a41c-19d160186bd9 is in state STARTED
2026-03-07 01:00:31.954524 | orchestrator | 2026-03-07 01:00:31 | INFO  | Task 73f16833-6364-4c09-a21b-6133ec21f593 is in state STARTED
2026-03-07 01:00:31.954582 | orchestrator | 2026-03-07 01:00:31 | INFO  | Wait 1 second(s) until the next check
2026-03-07 01:00:35.002110 | orchestrator | 2026-03-07 01:00:34 | INFO  | Task caa551a3-2c7b-45b5-8b35-8055f51c9eea is in state STARTED
2026-03-07 01:00:35.002648 | orchestrator | 2026-03-07 01:00:34 | INFO  | Task 8f335aab-7019-4a86-a41c-19d160186bd9 is in state STARTED
2026-03-07 01:00:35.005996 | orchestrator | 2026-03-07 01:00:35 | INFO  | Task 73f16833-6364-4c09-a21b-6133ec21f593 is in state STARTED
2026-03-07
01:00:35.006100 | orchestrator | 2026-03-07 01:00:35 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:00:38.048092 | orchestrator | 2026-03-07 01:00:38 | INFO  | Task caa551a3-2c7b-45b5-8b35-8055f51c9eea is in state STARTED 2026-03-07 01:00:38.049595 | orchestrator | 2026-03-07 01:00:38 | INFO  | Task 8f335aab-7019-4a86-a41c-19d160186bd9 is in state STARTED 2026-03-07 01:00:38.051935 | orchestrator | 2026-03-07 01:00:38 | INFO  | Task 73f16833-6364-4c09-a21b-6133ec21f593 is in state STARTED 2026-03-07 01:00:38.053671 | orchestrator | 2026-03-07 01:00:38 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:00:41.088388 | orchestrator | 2026-03-07 01:00:41 | INFO  | Task caa551a3-2c7b-45b5-8b35-8055f51c9eea is in state STARTED 2026-03-07 01:00:41.091362 | orchestrator | 2026-03-07 01:00:41 | INFO  | Task 8f335aab-7019-4a86-a41c-19d160186bd9 is in state STARTED 2026-03-07 01:00:41.093539 | orchestrator | 2026-03-07 01:00:41 | INFO  | Task 73f16833-6364-4c09-a21b-6133ec21f593 is in state STARTED 2026-03-07 01:00:41.093624 | orchestrator | 2026-03-07 01:00:41 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:00:44.143442 | orchestrator | 2026-03-07 01:00:44 | INFO  | Task caa551a3-2c7b-45b5-8b35-8055f51c9eea is in state STARTED 2026-03-07 01:00:44.144335 | orchestrator | 2026-03-07 01:00:44 | INFO  | Task 8f335aab-7019-4a86-a41c-19d160186bd9 is in state STARTED 2026-03-07 01:00:44.145897 | orchestrator | 2026-03-07 01:00:44 | INFO  | Task 73f16833-6364-4c09-a21b-6133ec21f593 is in state STARTED 2026-03-07 01:00:44.146326 | orchestrator | 2026-03-07 01:00:44 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:00:47.185078 | orchestrator | 2026-03-07 01:00:47 | INFO  | Task caa551a3-2c7b-45b5-8b35-8055f51c9eea is in state STARTED 2026-03-07 01:00:47.186806 | orchestrator | 2026-03-07 01:00:47 | INFO  | Task 8f335aab-7019-4a86-a41c-19d160186bd9 is in state STARTED 2026-03-07 01:00:47.188252 | orchestrator | 2026-03-07 01:00:47 | 
INFO  | Task 73f16833-6364-4c09-a21b-6133ec21f593 is in state STARTED 2026-03-07 01:00:47.188369 | orchestrator | 2026-03-07 01:00:47 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:00:50.236294 | orchestrator | 2026-03-07 01:00:50 | INFO  | Task caa551a3-2c7b-45b5-8b35-8055f51c9eea is in state STARTED 2026-03-07 01:00:50.239531 | orchestrator | 2026-03-07 01:00:50 | INFO  | Task 8f335aab-7019-4a86-a41c-19d160186bd9 is in state STARTED 2026-03-07 01:00:50.241648 | orchestrator | 2026-03-07 01:00:50 | INFO  | Task 73f16833-6364-4c09-a21b-6133ec21f593 is in state STARTED 2026-03-07 01:00:50.241730 | orchestrator | 2026-03-07 01:00:50 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:00:53.289027 | orchestrator | 2026-03-07 01:00:53 | INFO  | Task caa551a3-2c7b-45b5-8b35-8055f51c9eea is in state STARTED 2026-03-07 01:00:53.290902 | orchestrator | 2026-03-07 01:00:53 | INFO  | Task 8f335aab-7019-4a86-a41c-19d160186bd9 is in state STARTED 2026-03-07 01:00:53.294404 | orchestrator | 2026-03-07 01:00:53 | INFO  | Task 73f16833-6364-4c09-a21b-6133ec21f593 is in state STARTED 2026-03-07 01:00:53.294487 | orchestrator | 2026-03-07 01:00:53 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:00:56.330629 | orchestrator | 2026-03-07 01:00:56 | INFO  | Task caa551a3-2c7b-45b5-8b35-8055f51c9eea is in state STARTED 2026-03-07 01:00:56.332983 | orchestrator | 2026-03-07 01:00:56 | INFO  | Task 8f335aab-7019-4a86-a41c-19d160186bd9 is in state STARTED 2026-03-07 01:00:56.335225 | orchestrator | 2026-03-07 01:00:56 | INFO  | Task 73f16833-6364-4c09-a21b-6133ec21f593 is in state STARTED 2026-03-07 01:00:56.335293 | orchestrator | 2026-03-07 01:00:56 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:00:59.378331 | orchestrator | 2026-03-07 01:00:59 | INFO  | Task caa551a3-2c7b-45b5-8b35-8055f51c9eea is in state STARTED 2026-03-07 01:00:59.379050 | orchestrator | 2026-03-07 01:00:59 | INFO  | Task 8f335aab-7019-4a86-a41c-19d160186bd9 is in 
state STARTED 2026-03-07 01:00:59.380565 | orchestrator | 2026-03-07 01:00:59 | INFO  | Task 73f16833-6364-4c09-a21b-6133ec21f593 is in state STARTED 2026-03-07 01:00:59.380897 | orchestrator | 2026-03-07 01:00:59 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:01:02.409907 | orchestrator | 2026-03-07 01:01:02 | INFO  | Task caa551a3-2c7b-45b5-8b35-8055f51c9eea is in state STARTED 2026-03-07 01:01:02.411481 | orchestrator | 2026-03-07 01:01:02 | INFO  | Task 8f335aab-7019-4a86-a41c-19d160186bd9 is in state STARTED 2026-03-07 01:01:02.414111 | orchestrator | 2026-03-07 01:01:02 | INFO  | Task 73f16833-6364-4c09-a21b-6133ec21f593 is in state STARTED 2026-03-07 01:01:02.414772 | orchestrator | 2026-03-07 01:01:02 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:01:05.459363 | orchestrator | 2026-03-07 01:01:05 | INFO  | Task caa551a3-2c7b-45b5-8b35-8055f51c9eea is in state STARTED 2026-03-07 01:01:05.460880 | orchestrator | 2026-03-07 01:01:05 | INFO  | Task 8f335aab-7019-4a86-a41c-19d160186bd9 is in state STARTED 2026-03-07 01:01:05.464044 | orchestrator | 2026-03-07 01:01:05 | INFO  | Task 73f16833-6364-4c09-a21b-6133ec21f593 is in state STARTED 2026-03-07 01:01:05.464103 | orchestrator | 2026-03-07 01:01:05 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:01:08.509877 | orchestrator | 2026-03-07 01:01:08 | INFO  | Task caa551a3-2c7b-45b5-8b35-8055f51c9eea is in state STARTED 2026-03-07 01:01:08.513035 | orchestrator | 2026-03-07 01:01:08 | INFO  | Task 8f335aab-7019-4a86-a41c-19d160186bd9 is in state STARTED 2026-03-07 01:01:08.515978 | orchestrator | 2026-03-07 01:01:08 | INFO  | Task 73f16833-6364-4c09-a21b-6133ec21f593 is in state STARTED 2026-03-07 01:01:08.516053 | orchestrator | 2026-03-07 01:01:08 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:01:11.562447 | orchestrator | 2026-03-07 01:01:11 | INFO  | Task caa551a3-2c7b-45b5-8b35-8055f51c9eea is in state STARTED 2026-03-07 01:01:11.563628 | orchestrator 
| 2026-03-07 01:01:11 | INFO  | Task 8f335aab-7019-4a86-a41c-19d160186bd9 is in state STARTED 2026-03-07 01:01:11.564817 | orchestrator | 2026-03-07 01:01:11 | INFO  | Task 73f16833-6364-4c09-a21b-6133ec21f593 is in state STARTED 2026-03-07 01:01:11.564911 | orchestrator | 2026-03-07 01:01:11 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:01:14.616527 | orchestrator | 2026-03-07 01:01:14 | INFO  | Task caa551a3-2c7b-45b5-8b35-8055f51c9eea is in state STARTED 2026-03-07 01:01:14.625120 | orchestrator | 2026-03-07 01:01:14 | INFO  | Task 8f335aab-7019-4a86-a41c-19d160186bd9 is in state SUCCESS 2026-03-07 01:01:14.626446 | orchestrator | 2026-03-07 01:01:14.626513 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-03-07 01:01:14.626538 | orchestrator | 2.16.14 2026-03-07 01:01:14.626560 | orchestrator | 2026-03-07 01:01:14.626579 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2026-03-07 01:01:14.626600 | orchestrator | 2026-03-07 01:01:14.626621 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-07 01:01:14.626642 | orchestrator | Saturday 07 March 2026 00:59:00 +0000 (0:00:00.694) 0:00:00.694 ******** 2026-03-07 01:01:14.626661 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-07 01:01:14.626682 | orchestrator | 2026-03-07 01:01:14.626959 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-07 01:01:14.626990 | orchestrator | Saturday 07 March 2026 00:59:01 +0000 (0:00:00.719) 0:00:01.414 ******** 2026-03-07 01:01:14.627014 | orchestrator | ok: [testbed-node-4] 2026-03-07 01:01:14.627035 | orchestrator | ok: [testbed-node-3] 2026-03-07 01:01:14.627058 | orchestrator | ok: [testbed-node-5] 2026-03-07 01:01:14.627079 | orchestrator | 2026-03-07 01:01:14.627101 | orchestrator | 
TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-03-07 01:01:14.627123 | orchestrator | Saturday 07 March 2026 00:59:01 +0000 (0:00:00.655) 0:00:02.070 ********
2026-03-07 01:01:14.627143 | orchestrator | ok: [testbed-node-3]
2026-03-07 01:01:14.627164 | orchestrator | ok: [testbed-node-4]
2026-03-07 01:01:14.627184 | orchestrator | ok: [testbed-node-5]
2026-03-07 01:01:14.627300 | orchestrator |
2026-03-07 01:01:14.627325 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-03-07 01:01:14.628288 | orchestrator | Saturday 07 March 2026 00:59:02 +0000 (0:00:00.320) 0:00:02.390 ********
2026-03-07 01:01:14.628310 | orchestrator | ok: [testbed-node-3]
2026-03-07 01:01:14.628328 | orchestrator | ok: [testbed-node-4]
2026-03-07 01:01:14.628346 | orchestrator | ok: [testbed-node-5]
2026-03-07 01:01:14.628365 | orchestrator |
2026-03-07 01:01:14.628383 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-03-07 01:01:14.628403 | orchestrator | Saturday 07 March 2026 00:59:03 +0000 (0:00:00.845) 0:00:03.236 ********
2026-03-07 01:01:14.628422 | orchestrator | ok: [testbed-node-3]
2026-03-07 01:01:14.628440 | orchestrator | ok: [testbed-node-4]
2026-03-07 01:01:14.628458 | orchestrator | ok: [testbed-node-5]
2026-03-07 01:01:14.628477 | orchestrator |
2026-03-07 01:01:14.628494 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-03-07 01:01:14.628512 | orchestrator | Saturday 07 March 2026 00:59:03 +0000 (0:00:00.337) 0:00:03.574 ********
2026-03-07 01:01:14.628529 | orchestrator | ok: [testbed-node-3]
2026-03-07 01:01:14.628546 | orchestrator | ok: [testbed-node-4]
2026-03-07 01:01:14.628564 | orchestrator | ok: [testbed-node-5]
2026-03-07 01:01:14.628581 | orchestrator |
2026-03-07 01:01:14.628601 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-03-07 01:01:14.628621 | orchestrator | Saturday 07 March 2026 00:59:03 +0000 (0:00:00.339) 0:00:03.913 ********
2026-03-07 01:01:14.628640 | orchestrator | ok: [testbed-node-3]
2026-03-07 01:01:14.628659 | orchestrator | ok: [testbed-node-4]
2026-03-07 01:01:14.628678 | orchestrator | ok: [testbed-node-5]
2026-03-07 01:01:14.628742 | orchestrator |
2026-03-07 01:01:14.628763 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-03-07 01:01:14.628781 | orchestrator | Saturday 07 March 2026 00:59:04 +0000 (0:00:00.328) 0:00:04.241 ********
2026-03-07 01:01:14.628801 | orchestrator | skipping: [testbed-node-3]
2026-03-07 01:01:14.628820 | orchestrator | skipping: [testbed-node-4]
2026-03-07 01:01:14.628878 | orchestrator | skipping: [testbed-node-5]
2026-03-07 01:01:14.628899 | orchestrator |
2026-03-07 01:01:14.628996 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-03-07 01:01:14.629022 | orchestrator | Saturday 07 March 2026 00:59:04 +0000 (0:00:00.546) 0:00:04.788 ********
2026-03-07 01:01:14.629041 | orchestrator | ok: [testbed-node-3]
2026-03-07 01:01:14.629060 | orchestrator | ok: [testbed-node-4]
2026-03-07 01:01:14.629078 | orchestrator | ok: [testbed-node-5]
2026-03-07 01:01:14.629098 | orchestrator |
2026-03-07 01:01:14.629117 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-03-07 01:01:14.629136 | orchestrator | Saturday 07 March 2026 00:59:04 +0000 (0:00:00.322) 0:00:05.111 ********
2026-03-07 01:01:14.629154 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-07 01:01:14.629172 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-07 01:01:14.629191 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-07 01:01:14.629211 | orchestrator |
2026-03-07 01:01:14.629230 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-03-07 01:01:14.629248 | orchestrator | Saturday 07 March 2026 00:59:05 +0000 (0:00:00.690) 0:00:05.801 ********
2026-03-07 01:01:14.629266 | orchestrator | ok: [testbed-node-3]
2026-03-07 01:01:14.629284 | orchestrator | ok: [testbed-node-4]
2026-03-07 01:01:14.629303 | orchestrator | ok: [testbed-node-5]
2026-03-07 01:01:14.629324 | orchestrator |
2026-03-07 01:01:14.629362 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-03-07 01:01:14.629382 | orchestrator | Saturday 07 March 2026 00:59:06 +0000 (0:00:00.457) 0:00:06.258 ********
2026-03-07 01:01:14.629401 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-07 01:01:14.629420 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-07 01:01:14.629438 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-07 01:01:14.629456 | orchestrator |
2026-03-07 01:01:14.629473 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-03-07 01:01:14.629492 | orchestrator | Saturday 07 March 2026 00:59:08 +0000 (0:00:02.111) 0:00:08.369 ********
2026-03-07 01:01:14.629511 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-03-07 01:01:14.629529 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-03-07 01:01:14.629547 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-03-07 01:01:14.629565 | orchestrator | skipping: [testbed-node-3]
2026-03-07 01:01:14.629582 | orchestrator |
2026-03-07 01:01:14.629691 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-03-07 01:01:14.629713 | orchestrator | Saturday 07 March 2026 00:59:08 +0000 (0:00:00.691) 0:00:09.061 ********
2026-03-07 01:01:14.629731 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-03-07 01:01:14.629751 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-03-07 01:01:14.629768 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-03-07 01:01:14.629786 | orchestrator | skipping: [testbed-node-3]
2026-03-07 01:01:14.629804 | orchestrator |
2026-03-07 01:01:14.629820 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-03-07 01:01:14.629837 | orchestrator | Saturday 07 March 2026 00:59:09 +0000 (0:00:00.891) 0:00:09.952 ********
2026-03-07 01:01:14.629873 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-07 01:01:14.629895 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-07 01:01:14.629913 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-07 01:01:14.629980 | orchestrator | skipping: [testbed-node-3]
2026-03-07 01:01:14.629999 | orchestrator |
2026-03-07 01:01:14.630054 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-03-07 01:01:14.630075 | orchestrator | Saturday 07 March 2026 00:59:10 +0000 (0:00:00.409) 0:00:10.362 ********
2026-03-07 01:01:14.630100 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '45fe07f0b017', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-07 00:59:06.751848', 'end': '2026-03-07 00:59:06.794435', 'delta': '0:00:00.042587', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['45fe07f0b017'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-03-07 01:01:14.630114 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '8660af6cd76b', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-07 00:59:07.523366', 'end': '2026-03-07 00:59:07.568667', 'delta': '0:00:00.045301', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['8660af6cd76b'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-03-07 01:01:14.630172 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '670c29a4213b', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-07 00:59:08.007042', 'end': '2026-03-07 00:59:08.060246', 'delta': '0:00:00.053204', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['670c29a4213b'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-03-07 01:01:14.630195 | orchestrator |
2026-03-07 01:01:14.630205 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-03-07 01:01:14.630215 | orchestrator | Saturday 07 March 2026 00:59:10 +0000 (0:00:00.203) 0:00:10.565 ********
2026-03-07 01:01:14.630225 | orchestrator | ok: [testbed-node-3]
2026-03-07 01:01:14.630234 | orchestrator | ok: [testbed-node-4]
2026-03-07 01:01:14.630244 | orchestrator | ok: [testbed-node-5]
2026-03-07 01:01:14.630254 | orchestrator |
2026-03-07 01:01:14.630263 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-03-07 01:01:14.630273 | orchestrator | Saturday 07 March 2026 00:59:10 +0000 (0:00:00.479) 0:00:11.044 ********
2026-03-07 01:01:14.630282 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)]
2026-03-07 01:01:14.630292 | orchestrator |
2026-03-07 01:01:14.630302 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-03-07 01:01:14.630311 | orchestrator | Saturday 07 March 2026 00:59:12 +0000 (0:00:01.687) 0:00:12.732 ********
2026-03-07 01:01:14.630321 | orchestrator | skipping: [testbed-node-3]
2026-03-07 01:01:14.630330 | orchestrator | skipping: [testbed-node-4]
2026-03-07 01:01:14.630340 | orchestrator | skipping: [testbed-node-5]
2026-03-07 01:01:14.630349 | orchestrator |
2026-03-07 01:01:14.630359 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-03-07 01:01:14.630369 | orchestrator | Saturday 07 March 2026 00:59:12 +0000 (0:00:00.374) 0:00:13.107 ********
2026-03-07 01:01:14.630378 | orchestrator | skipping: [testbed-node-3]
2026-03-07 01:01:14.630388 | orchestrator | skipping: [testbed-node-4]
2026-03-07 01:01:14.630398 | orchestrator | skipping: [testbed-node-5]
2026-03-07 01:01:14.630407 | orchestrator |
2026-03-07 01:01:14.630417 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-03-07 01:01:14.630426 | orchestrator | Saturday 07 March 2026 00:59:13 +0000 (0:00:00.447) 0:00:13.554 ********
2026-03-07 01:01:14.630436 | orchestrator | skipping: [testbed-node-3]
2026-03-07 01:01:14.630445 | orchestrator | skipping: [testbed-node-4]
2026-03-07 01:01:14.630455 | orchestrator | skipping: [testbed-node-5]
2026-03-07 01:01:14.630464 | orchestrator |
2026-03-07 01:01:14.630474 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-03-07 01:01:14.630484 | orchestrator | Saturday 07 March 2026 00:59:13 +0000 (0:00:00.510) 0:00:14.065 ********
2026-03-07 01:01:14.630493 | orchestrator | ok: [testbed-node-3]
2026-03-07 01:01:14.630502 | orchestrator |
2026-03-07 01:01:14.630512 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-03-07 01:01:14.630522 | orchestrator | Saturday 07 March 2026 00:59:14 +0000 (0:00:00.134) 0:00:14.200 ********
2026-03-07 01:01:14.630531 | orchestrator | skipping: [testbed-node-3]
2026-03-07 01:01:14.630540 | orchestrator |
2026-03-07 01:01:14.630550 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-03-07 01:01:14.630560 | orchestrator | Saturday 07 March 2026 00:59:14 +0000 (0:00:00.247) 0:00:14.447 ********
2026-03-07 01:01:14.630569 | orchestrator | skipping: [testbed-node-3]
2026-03-07 01:01:14.630579 | orchestrator | skipping: [testbed-node-4]
2026-03-07 01:01:14.630588 | orchestrator | skipping: [testbed-node-5]
2026-03-07 01:01:14.630597 | orchestrator |
2026-03-07 01:01:14.630607 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-03-07 01:01:14.630616 | orchestrator | Saturday 07 March 2026 00:59:14 +0000 (0:00:00.323) 0:00:14.770 ********
2026-03-07 01:01:14.630626 | orchestrator | skipping: [testbed-node-3]
2026-03-07 01:01:14.630635 | orchestrator | skipping: [testbed-node-4]
2026-03-07 01:01:14.630645 | orchestrator | skipping: [testbed-node-5]
2026-03-07 01:01:14.630654 | orchestrator |
2026-03-07 01:01:14.630664 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-03-07 01:01:14.630673 | orchestrator | Saturday 07 March 2026 00:59:14 +0000 (0:00:00.346) 0:00:15.117 ********
2026-03-07 01:01:14.630683 | orchestrator | skipping: [testbed-node-3]
2026-03-07 01:01:14.630692 | orchestrator | skipping: [testbed-node-4]
2026-03-07 01:01:14.630702 | orchestrator | skipping: [testbed-node-5]
2026-03-07 01:01:14.630717 | orchestrator |
2026-03-07 01:01:14.630732 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-03-07 01:01:14.630742 | orchestrator | Saturday 07 March 2026 00:59:15 +0000 (0:00:00.586) 0:00:15.703 ********
2026-03-07 01:01:14.630752 | orchestrator | skipping: [testbed-node-3]
2026-03-07 01:01:14.630761 | orchestrator | skipping: [testbed-node-4]
2026-03-07 01:01:14.630771 | orchestrator | skipping: [testbed-node-5]
2026-03-07 01:01:14.630780 | orchestrator |
2026-03-07 01:01:14.630790 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-03-07 01:01:14.630799 | orchestrator | Saturday 07 March 2026 00:59:15 +0000 (0:00:00.356) 0:00:16.059 ********
2026-03-07 01:01:14.630809 | orchestrator | skipping: [testbed-node-3]
2026-03-07 01:01:14.630819 | orchestrator | skipping: [testbed-node-4]
2026-03-07 01:01:14.630828 | orchestrator | skipping: [testbed-node-5]
2026-03-07 01:01:14.630838 | orchestrator |
2026-03-07 01:01:14.630847 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-03-07 01:01:14.630857 | orchestrator | Saturday 07 March 2026 00:59:16 +0000 (0:00:00.332) 0:00:16.392 ********
2026-03-07 01:01:14.630867 | orchestrator | skipping: [testbed-node-3]
2026-03-07 01:01:14.630876 | orchestrator | skipping: [testbed-node-4]
2026-03-07 01:01:14.630885 | orchestrator | skipping: [testbed-node-5]
2026-03-07 01:01:14.630942 | orchestrator |
2026-03-07 01:01:14.630955 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-03-07 01:01:14.630965 | orchestrator | Saturday 07 March 2026 00:59:16 +0000 (0:00:00.344) 0:00:16.736 ********
2026-03-07 01:01:14.630975 | orchestrator | skipping: [testbed-node-3]
2026-03-07 01:01:14.630984 | orchestrator | skipping: [testbed-node-4]
2026-03-07 01:01:14.630994 | orchestrator | skipping: [testbed-node-5]
2026-03-07 01:01:14.631003 | orchestrator |
2026-03-07 01:01:14.631013 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-03-07 01:01:14.631023 | orchestrator | Saturday 07 March 2026 00:59:17 +0000 (0:00:00.584) 0:00:17.321 ********
2026-03-07 01:01:14.631034 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e9f941f3--03bb--56ef--8ac7--c30bc8004c51-osd--block--e9f941f3--03bb--56ef--8ac7--c30bc8004c51', 'dm-uuid-LVM-jiYuCfZIFFLLATdSqMWZs2byf2Hqw9KoUEwdOtxjfUj2xFbqUYee2AMaAjRqF8Gb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-03-07 01:01:14.631045 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6cee2ec4--9e84--549b--8075--e81043ce518c-osd--block--6cee2ec4--9e84--549b--8075--e81043ce518c', 'dm-uuid-LVM-B8bLbepi7zk4LlUHWUoFcpgJuCxmaQP4j5OWt0ye3awuf5KvZzYB8ByFXsEb2OPh'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-03-07 01:01:14.631055 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-07 01:01:14.631066 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-07 01:01:14.631083 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-07 01:01:14.631098 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-07 01:01:14.631108 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-07 01:01:14.631147 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-07 01:01:14.631158 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-07 01:01:14.631168 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-07 01:01:14.631178 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c6d853cd--f8df--5f7f--ab25--9ac4f40a4d2c-osd--block--c6d853cd--f8df--5f7f--ab25--9ac4f40a4d2c', 'dm-uuid-LVM-8XR3XmOVd2B8PVaNnTDqflfNiRw1uJKWO0Hm3UQTPEfuME0WHh0U21tmJ674G9e9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-03-07 01:01:14.631198 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_448434c8-f92f-4dad-84e2-85ad64f4e35e', 'scsi-SQEMU_QEMU_HARDDISK_448434c8-f92f-4dad-84e2-85ad64f4e35e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_448434c8-f92f-4dad-84e2-85ad64f4e35e-part1', 'scsi-SQEMU_QEMU_HARDDISK_448434c8-f92f-4dad-84e2-85ad64f4e35e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_448434c8-f92f-4dad-84e2-85ad64f4e35e-part14', 'scsi-SQEMU_QEMU_HARDDISK_448434c8-f92f-4dad-84e2-85ad64f4e35e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_448434c8-f92f-4dad-84e2-85ad64f4e35e-part15', 'scsi-SQEMU_QEMU_HARDDISK_448434c8-f92f-4dad-84e2-85ad64f4e35e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_448434c8-f92f-4dad-84e2-85ad64f4e35e-part16', 'scsi-SQEMU_QEMU_HARDDISK_448434c8-f92f-4dad-84e2-85ad64f4e35e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-07 01:01:14.631254 |
orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--50ec861c--6b17--5421--b6cb--257ea2a8b129-osd--block--50ec861c--6b17--5421--b6cb--257ea2a8b129', 'dm-uuid-LVM-ITJqhhsuUHE2k8u0ISlqfZTbYeEByERaXDaCZ04QKtCaLlQ7frKxmGzqlFMPE1RH'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-07 01:01:14.631267 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--e9f941f3--03bb--56ef--8ac7--c30bc8004c51-osd--block--e9f941f3--03bb--56ef--8ac7--c30bc8004c51'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Lq9jL1-3czA-ypLx-r35L-ph0k-iv5M-Tpn0zj', 'scsi-0QEMU_QEMU_HARDDISK_6b3da8fe-8a9b-450a-9caf-2db14f74686e', 'scsi-SQEMU_QEMU_HARDDISK_6b3da8fe-8a9b-450a-9caf-2db14f74686e'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-07 01:01:14.631278 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-07 01:01:14.631289 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': 
{'holders': ['ceph--6cee2ec4--9e84--549b--8075--e81043ce518c-osd--block--6cee2ec4--9e84--549b--8075--e81043ce518c'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-7sfVzN-ghhm-9cSP-0Pq1-SUpz-oO0I-1m8yZK', 'scsi-0QEMU_QEMU_HARDDISK_72259f68-e866-4719-b0ea-eb473e4fd6bd', 'scsi-SQEMU_QEMU_HARDDISK_72259f68-e866-4719-b0ea-eb473e4fd6bd'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-07 01:01:14.631307 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-07 01:01:14.631318 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cc667673-5185-49c1-bb99-04f4fd4068da', 'scsi-SQEMU_QEMU_HARDDISK_cc667673-5185-49c1-bb99-04f4fd4068da'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-07 01:01:14.631333 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-07 01:01:14.631369 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-07-00-03-06-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-07 01:01:14.631381 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2026-03-07 01:01:14.631394 | orchestrator | skipping: [testbed-node-3] 2026-03-07 01:01:14.631411 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-07 01:01:14.631426 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-07 01:01:14.631443 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-07 01:01:14.631470 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-07 01:01:14.631532 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': 
{'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_420a1f40-7e0f-4106-8b14-3c7e5e75cad6', 'scsi-SQEMU_QEMU_HARDDISK_420a1f40-7e0f-4106-8b14-3c7e5e75cad6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_420a1f40-7e0f-4106-8b14-3c7e5e75cad6-part1', 'scsi-SQEMU_QEMU_HARDDISK_420a1f40-7e0f-4106-8b14-3c7e5e75cad6-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_420a1f40-7e0f-4106-8b14-3c7e5e75cad6-part14', 'scsi-SQEMU_QEMU_HARDDISK_420a1f40-7e0f-4106-8b14-3c7e5e75cad6-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_420a1f40-7e0f-4106-8b14-3c7e5e75cad6-part15', 'scsi-SQEMU_QEMU_HARDDISK_420a1f40-7e0f-4106-8b14-3c7e5e75cad6-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_420a1f40-7e0f-4106-8b14-3c7e5e75cad6-part16', 'scsi-SQEMU_QEMU_HARDDISK_420a1f40-7e0f-4106-8b14-3c7e5e75cad6-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 
'vendor': 'QEMU', 'virtual': 1}})  2026-03-07 01:01:14.631547 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--c6d853cd--f8df--5f7f--ab25--9ac4f40a4d2c-osd--block--c6d853cd--f8df--5f7f--ab25--9ac4f40a4d2c'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-kL3PrW-YKUZ-t2Rl-lXXg-ITpx-OegE-g2PFSL', 'scsi-0QEMU_QEMU_HARDDISK_c95cdd10-84fe-4990-af41-f1a34ec8ee15', 'scsi-SQEMU_QEMU_HARDDISK_c95cdd10-84fe-4990-af41-f1a34ec8ee15'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-07 01:01:14.631558 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--50ec861c--6b17--5421--b6cb--257ea2a8b129-osd--block--50ec861c--6b17--5421--b6cb--257ea2a8b129'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-PC8ITn-KVPH-rj6x-YF4C-PrRN-sXNg-fVd1gi', 'scsi-0QEMU_QEMU_HARDDISK_aeae70bf-06ae-4bd4-b471-9be2a413b359', 'scsi-SQEMU_QEMU_HARDDISK_aeae70bf-06ae-4bd4-b471-9be2a413b359'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-07 01:01:14.631576 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9c38bee3-edc8-40af-8be7-576eb57a340e', 'scsi-SQEMU_QEMU_HARDDISK_9c38bee3-edc8-40af-8be7-576eb57a340e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-07 01:01:14.631587 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f3e458ba--b75f--5cb4--a1c9--e61fe3486295-osd--block--f3e458ba--b75f--5cb4--a1c9--e61fe3486295', 'dm-uuid-LVM-VhQzIXCpqfcn5zxKE5r7ztI1fyiYqLzHEtYkvSf66TMZqdVR7ccCs8N8OfaPuyV8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-07 01:01:14.631601 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-07-00-02-59-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-07 01:01:14.631612 | orchestrator | skipping: [testbed-node-4] 2026-03-07 01:01:14.631629 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--5cfbeba1--5550--585b--8a7e--42a4921f8eca-osd--block--5cfbeba1--5550--585b--8a7e--42a4921f8eca', 'dm-uuid-LVM-Wq2OW3tsl6jTTaLcKmTav2JwTpRCFqU2JgsK1FfoH8ERtcQ22t3sS9NNXbRdzkph'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-07 01:01:14.631639 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-07 01:01:14.631649 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-07 01:01:14.631659 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-07 01:01:14.631675 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 
'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-07 01:01:14.631704 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-07 01:01:14.631715 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-07 01:01:14.631725 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-07 01:01:14.631740 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': 
None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-07 01:01:14.631764 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_86fe1ccb-6d77-4e2c-ab6c-94ce433f2c86', 'scsi-SQEMU_QEMU_HARDDISK_86fe1ccb-6d77-4e2c-ab6c-94ce433f2c86'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_86fe1ccb-6d77-4e2c-ab6c-94ce433f2c86-part1', 'scsi-SQEMU_QEMU_HARDDISK_86fe1ccb-6d77-4e2c-ab6c-94ce433f2c86-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_86fe1ccb-6d77-4e2c-ab6c-94ce433f2c86-part14', 'scsi-SQEMU_QEMU_HARDDISK_86fe1ccb-6d77-4e2c-ab6c-94ce433f2c86-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_86fe1ccb-6d77-4e2c-ab6c-94ce433f2c86-part15', 'scsi-SQEMU_QEMU_HARDDISK_86fe1ccb-6d77-4e2c-ab6c-94ce433f2c86-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_86fe1ccb-6d77-4e2c-ab6c-94ce433f2c86-part16', 'scsi-SQEMU_QEMU_HARDDISK_86fe1ccb-6d77-4e2c-ab6c-94ce433f2c86-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 
'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-07 01:01:14.631782 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--f3e458ba--b75f--5cb4--a1c9--e61fe3486295-osd--block--f3e458ba--b75f--5cb4--a1c9--e61fe3486295'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-8FKuTE-n2yH-Ra88-Nh73-4mV7-nLrM-yos4UV', 'scsi-0QEMU_QEMU_HARDDISK_34b2d3d1-49da-433c-9475-894febcc7103', 'scsi-SQEMU_QEMU_HARDDISK_34b2d3d1-49da-433c-9475-894febcc7103'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-07 01:01:14.631792 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--5cfbeba1--5550--585b--8a7e--42a4921f8eca-osd--block--5cfbeba1--5550--585b--8a7e--42a4921f8eca'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-fAaf82-4o5V-tENd-n1vK-sRdp-WZdV-yCL7oe', 'scsi-0QEMU_QEMU_HARDDISK_c20bba62-61d0-4a1a-9760-7959bbad95dc', 'scsi-SQEMU_QEMU_HARDDISK_c20bba62-61d0-4a1a-9760-7959bbad95dc'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-07 01:01:14.631808 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_56f8efd0-3f15-4df4-bf76-395b3326da9d', 'scsi-SQEMU_QEMU_HARDDISK_56f8efd0-3f15-4df4-bf76-395b3326da9d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-07 01:01:14.631824 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-07-00-02-58-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-07 01:01:14.631835 | orchestrator | skipping: [testbed-node-5] 2026-03-07 01:01:14.631844 | orchestrator | 2026-03-07 01:01:14.631854 | orchestrator | TASK [ceph-facts : Set_fact devices 
generate device list when osd_auto_discovery] *** 2026-03-07 01:01:14.631864 | orchestrator | Saturday 07 March 2026 00:59:17 +0000 (0:00:00.599) 0:00:17.920 ******** 2026-03-07 01:01:14.631875 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e9f941f3--03bb--56ef--8ac7--c30bc8004c51-osd--block--e9f941f3--03bb--56ef--8ac7--c30bc8004c51', 'dm-uuid-LVM-jiYuCfZIFFLLATdSqMWZs2byf2Hqw9KoUEwdOtxjfUj2xFbqUYee2AMaAjRqF8Gb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 01:01:14.631897 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6cee2ec4--9e84--549b--8075--e81043ce518c-osd--block--6cee2ec4--9e84--549b--8075--e81043ce518c', 'dm-uuid-LVM-B8bLbepi7zk4LlUHWUoFcpgJuCxmaQP4j5OWt0ye3awuf5KvZzYB8ByFXsEb2OPh'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 01:01:14.631908 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
[... repeated per-device skip output elided: testbed-node-3, testbed-node-4 and testbed-node-5 each skipped every remaining block device (dm-0, dm-1, loop0-loop7, sda with its partitions, sdb, sdc, sdd, sr0) because the conditional 'osd_auto_discovery | default(False) | bool' evaluated to False ...]
2026-03-07 01:01:14.632320 | orchestrator | skipping: [testbed-node-3]
2026-03-07 01:01:14.632440 | orchestrator | skipping: [testbed-node-4]
2026-03-07 01:01:14.632604 | orchestrator | skipping: [testbed-node-5]
2026-03-07 01:01:14.632612 | orchestrator |
2026-03-07 01:01:14.632620 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-03-07 01:01:14.632628 | orchestrator | Saturday 07 March 2026 00:59:18 +0000 (0:00:00.665) 0:00:18.585 ********
2026-03-07 01:01:14.632636 | orchestrator | ok: [testbed-node-3]
2026-03-07 01:01:14.632644 | orchestrator | ok: [testbed-node-4]
2026-03-07 01:01:14.632652 | orchestrator | ok: [testbed-node-5]
2026-03-07 01:01:14.632660 | orchestrator |
2026-03-07 01:01:14.632667 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-03-07 01:01:14.632675 | orchestrator | Saturday 07 March 2026 00:59:19 +0000 (0:00:00.697) 0:00:19.282 ********
2026-03-07 01:01:14.632683 | orchestrator | ok: [testbed-node-3]
2026-03-07 01:01:14.632691 | orchestrator | ok: [testbed-node-4]
2026-03-07 01:01:14.632699 | orchestrator | ok: [testbed-node-5]
2026-03-07 01:01:14.632706 | orchestrator |
2026-03-07 01:01:14.632714 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-07 01:01:14.632722 | orchestrator | Saturday 07 March 2026 00:59:19 +0000 (0:00:00.596) 0:00:19.879 ********
2026-03-07 01:01:14.632730 | orchestrator | ok: [testbed-node-3]
2026-03-07 01:01:14.632738 | orchestrator | ok: [testbed-node-4]
2026-03-07 01:01:14.632745 | orchestrator | ok: [testbed-node-5]
2026-03-07 01:01:14.632753 | orchestrator |
2026-03-07 01:01:14.632761 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-07 01:01:14.632769 | orchestrator | Saturday 07 March 2026 00:59:20 +0000 (0:00:00.671) 0:00:20.551 ********
2026-03-07 01:01:14.632776 | orchestrator | skipping: [testbed-node-3]
2026-03-07 01:01:14.632784 | orchestrator | skipping: [testbed-node-4]
2026-03-07 01:01:14.632792 | orchestrator | skipping: [testbed-node-5]
2026-03-07 01:01:14.632800 | orchestrator |
2026-03-07 01:01:14.632808 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-07 01:01:14.632816 | orchestrator | Saturday 07 March 2026 00:59:20 +0000 (0:00:00.381) 0:00:20.932 ********
2026-03-07 01:01:14.632823 | orchestrator | skipping: [testbed-node-3]
2026-03-07 01:01:14.632831 | orchestrator | skipping: [testbed-node-4]
2026-03-07 01:01:14.632839 | orchestrator | skipping: [testbed-node-5]
2026-03-07 01:01:14.632846 | orchestrator |
2026-03-07 01:01:14.632854 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-07 01:01:14.632862 | orchestrator | Saturday 07 March 2026 00:59:21 +0000 (0:00:00.473) 0:00:21.406 ********
2026-03-07 01:01:14.632870 | orchestrator | skipping: [testbed-node-3]
2026-03-07 01:01:14.632878 | orchestrator | skipping: [testbed-node-4]
2026-03-07 01:01:14.632885 | orchestrator | skipping: [testbed-node-5]
2026-03-07 01:01:14.632893 | orchestrator |
2026-03-07 01:01:14.632901 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-03-07 01:01:14.632909 | orchestrator | Saturday 07 March 2026 00:59:21 +0000 (0:00:00.586) 0:00:21.992 ********
2026-03-07 01:01:14.632917 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-03-07 01:01:14.632940 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-03-07 01:01:14.632948 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-03-07 01:01:14.632956 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-03-07 01:01:14.632972 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-03-07 01:01:14.632979 | orchestrator | 
ok: [testbed-node-3] => (item=testbed-node-2) 2026-03-07 01:01:14.632987 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-03-07 01:01:14.632995 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-03-07 01:01:14.633003 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-03-07 01:01:14.633011 | orchestrator | 2026-03-07 01:01:14.633019 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-07 01:01:14.633027 | orchestrator | Saturday 07 March 2026 00:59:22 +0000 (0:00:00.963) 0:00:22.955 ******** 2026-03-07 01:01:14.633034 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-03-07 01:01:14.633042 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-03-07 01:01:14.633050 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-03-07 01:01:14.633058 | orchestrator | skipping: [testbed-node-3] 2026-03-07 01:01:14.633065 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-03-07 01:01:14.633073 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-03-07 01:01:14.633085 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-03-07 01:01:14.633093 | orchestrator | skipping: [testbed-node-4] 2026-03-07 01:01:14.633100 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-03-07 01:01:14.633108 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-03-07 01:01:14.633116 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-03-07 01:01:14.633124 | orchestrator | skipping: [testbed-node-5] 2026-03-07 01:01:14.633131 | orchestrator | 2026-03-07 01:01:14.633139 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-03-07 01:01:14.633147 | orchestrator | Saturday 07 March 2026 00:59:23 +0000 (0:00:00.400) 0:00:23.355 ******** 2026-03-07 
01:01:14.633155 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-07 01:01:14.633163 | orchestrator | 2026-03-07 01:01:14.633171 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-07 01:01:14.633180 | orchestrator | Saturday 07 March 2026 00:59:23 +0000 (0:00:00.819) 0:00:24.175 ******** 2026-03-07 01:01:14.633193 | orchestrator | skipping: [testbed-node-3] 2026-03-07 01:01:14.633201 | orchestrator | skipping: [testbed-node-4] 2026-03-07 01:01:14.633209 | orchestrator | skipping: [testbed-node-5] 2026-03-07 01:01:14.633217 | orchestrator | 2026-03-07 01:01:14.633224 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-07 01:01:14.633232 | orchestrator | Saturday 07 March 2026 00:59:24 +0000 (0:00:00.353) 0:00:24.529 ******** 2026-03-07 01:01:14.633240 | orchestrator | skipping: [testbed-node-3] 2026-03-07 01:01:14.633248 | orchestrator | skipping: [testbed-node-4] 2026-03-07 01:01:14.633256 | orchestrator | skipping: [testbed-node-5] 2026-03-07 01:01:14.633263 | orchestrator | 2026-03-07 01:01:14.633271 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-07 01:01:14.633279 | orchestrator | Saturday 07 March 2026 00:59:24 +0000 (0:00:00.355) 0:00:24.884 ******** 2026-03-07 01:01:14.633287 | orchestrator | skipping: [testbed-node-3] 2026-03-07 01:01:14.633295 | orchestrator | skipping: [testbed-node-4] 2026-03-07 01:01:14.633302 | orchestrator | skipping: [testbed-node-5] 2026-03-07 01:01:14.633310 | orchestrator | 2026-03-07 01:01:14.633318 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-07 01:01:14.633326 | orchestrator | Saturday 07 March 2026 00:59:25 +0000 (0:00:00.332) 0:00:25.217 ******** 2026-03-07 
01:01:14.633334 | orchestrator | ok: [testbed-node-3] 2026-03-07 01:01:14.633341 | orchestrator | ok: [testbed-node-4] 2026-03-07 01:01:14.633349 | orchestrator | ok: [testbed-node-5] 2026-03-07 01:01:14.633357 | orchestrator | 2026-03-07 01:01:14.633365 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-07 01:01:14.633379 | orchestrator | Saturday 07 March 2026 00:59:25 +0000 (0:00:00.708) 0:00:25.926 ******** 2026-03-07 01:01:14.633387 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-07 01:01:14.633395 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-07 01:01:14.633403 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-07 01:01:14.633410 | orchestrator | skipping: [testbed-node-3] 2026-03-07 01:01:14.633418 | orchestrator | 2026-03-07 01:01:14.633426 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-07 01:01:14.633434 | orchestrator | Saturday 07 March 2026 00:59:26 +0000 (0:00:00.474) 0:00:26.400 ******** 2026-03-07 01:01:14.633441 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-07 01:01:14.633449 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-07 01:01:14.633457 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-07 01:01:14.633465 | orchestrator | skipping: [testbed-node-3] 2026-03-07 01:01:14.633472 | orchestrator | 2026-03-07 01:01:14.633480 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-07 01:01:14.633489 | orchestrator | Saturday 07 March 2026 00:59:26 +0000 (0:00:00.444) 0:00:26.845 ******** 2026-03-07 01:01:14.633496 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-07 01:01:14.633504 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-07 01:01:14.633512 | 
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-07 01:01:14.633520 | orchestrator | skipping: [testbed-node-3] 2026-03-07 01:01:14.633528 | orchestrator | 2026-03-07 01:01:14.633535 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-07 01:01:14.633543 | orchestrator | Saturday 07 March 2026 00:59:27 +0000 (0:00:00.424) 0:00:27.269 ******** 2026-03-07 01:01:14.633551 | orchestrator | ok: [testbed-node-3] 2026-03-07 01:01:14.633558 | orchestrator | ok: [testbed-node-4] 2026-03-07 01:01:14.633566 | orchestrator | ok: [testbed-node-5] 2026-03-07 01:01:14.633574 | orchestrator | 2026-03-07 01:01:14.633582 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-07 01:01:14.633590 | orchestrator | Saturday 07 March 2026 00:59:27 +0000 (0:00:00.355) 0:00:27.625 ******** 2026-03-07 01:01:14.633597 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-03-07 01:01:14.633606 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-03-07 01:01:14.633613 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-03-07 01:01:14.633621 | orchestrator | 2026-03-07 01:01:14.633629 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-03-07 01:01:14.633637 | orchestrator | Saturday 07 March 2026 00:59:27 +0000 (0:00:00.534) 0:00:28.159 ******** 2026-03-07 01:01:14.633645 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-07 01:01:14.633653 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-07 01:01:14.633661 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-07 01:01:14.633668 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-03-07 01:01:14.633676 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => 
(item=testbed-node-4) 2026-03-07 01:01:14.633688 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-07 01:01:14.633696 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-07 01:01:14.633704 | orchestrator | 2026-03-07 01:01:14.633712 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-03-07 01:01:14.633720 | orchestrator | Saturday 07 March 2026 00:59:29 +0000 (0:00:01.170) 0:00:29.329 ******** 2026-03-07 01:01:14.633728 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-07 01:01:14.633735 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-07 01:01:14.633749 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-07 01:01:14.633757 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-03-07 01:01:14.633765 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-07 01:01:14.633773 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-07 01:01:14.633785 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-07 01:01:14.633793 | orchestrator | 2026-03-07 01:01:14.633800 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2026-03-07 01:01:14.633808 | orchestrator | Saturday 07 March 2026 00:59:31 +0000 (0:00:02.345) 0:00:31.675 ******** 2026-03-07 01:01:14.633816 | orchestrator | skipping: [testbed-node-3] 2026-03-07 01:01:14.633824 | orchestrator | skipping: [testbed-node-4] 2026-03-07 01:01:14.633832 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2026-03-07 01:01:14.633839 | orchestrator | 2026-03-07 01:01:14.633847 | 
orchestrator | TASK [create openstack pool(s)] ************************************************ 2026-03-07 01:01:14.633855 | orchestrator | Saturday 07 March 2026 00:59:31 +0000 (0:00:00.388) 0:00:32.063 ******** 2026-03-07 01:01:14.633864 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-07 01:01:14.633873 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-07 01:01:14.633883 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-07 01:01:14.633896 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-07 01:01:14.633910 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-07 01:01:14.633953 | orchestrator | 2026-03-07 01:01:14.633971 | orchestrator | TASK [generate keys] 
*********************************************************** 2026-03-07 01:01:14.633983 | orchestrator | Saturday 07 March 2026 01:00:18 +0000 (0:00:46.821) 0:01:18.884 ******** 2026-03-07 01:01:14.633995 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-07 01:01:14.634008 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-07 01:01:14.634058 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-07 01:01:14.634072 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-07 01:01:14.634086 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-07 01:01:14.634100 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-07 01:01:14.634114 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2026-03-07 01:01:14.634128 | orchestrator | 2026-03-07 01:01:14.634142 | orchestrator | TASK [get keys from monitors] ************************************************** 2026-03-07 01:01:14.634167 | orchestrator | Saturday 07 March 2026 01:00:43 +0000 (0:00:24.702) 0:01:43.587 ******** 2026-03-07 01:01:14.634182 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-07 01:01:14.634195 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-07 01:01:14.634204 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-07 01:01:14.634212 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-07 01:01:14.634225 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-07 01:01:14.634233 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-07 01:01:14.634241 | orchestrator | 
ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-07 01:01:14.634248 | orchestrator | 2026-03-07 01:01:14.634256 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2026-03-07 01:01:14.634264 | orchestrator | Saturday 07 March 2026 01:00:55 +0000 (0:00:12.581) 0:01:56.169 ******** 2026-03-07 01:01:14.634272 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-07 01:01:14.634280 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-07 01:01:14.634288 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-07 01:01:14.634295 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-07 01:01:14.634303 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-07 01:01:14.634319 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-07 01:01:14.634328 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-07 01:01:14.634335 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-07 01:01:14.634343 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-07 01:01:14.634351 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-07 01:01:14.634358 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-07 01:01:14.634366 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-07 01:01:14.634374 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-07 01:01:14.634382 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 
2026-03-07 01:01:14.634389 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-07 01:01:14.634397 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-07 01:01:14.634405 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-07 01:01:14.634413 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-07 01:01:14.634421 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2026-03-07 01:01:14.634429 | orchestrator | 2026-03-07 01:01:14.634436 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-07 01:01:14.634444 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-03-07 01:01:14.634454 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-03-07 01:01:14.634462 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-03-07 01:01:14.634470 | orchestrator | 2026-03-07 01:01:14.634478 | orchestrator | 2026-03-07 01:01:14.634486 | orchestrator | 2026-03-07 01:01:14.634494 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-07 01:01:14.634508 | orchestrator | Saturday 07 March 2026 01:01:13 +0000 (0:00:17.843) 0:02:14.013 ******** 2026-03-07 01:01:14.634516 | orchestrator | =============================================================================== 2026-03-07 01:01:14.634523 | orchestrator | create openstack pool(s) ----------------------------------------------- 46.82s 2026-03-07 01:01:14.634531 | orchestrator | generate keys ---------------------------------------------------------- 24.70s 2026-03-07 01:01:14.634539 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 17.84s 
2026-03-07 01:01:14.634547 | orchestrator | get keys from monitors ------------------------------------------------- 12.58s 2026-03-07 01:01:14.634555 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 2.35s 2026-03-07 01:01:14.634562 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.11s 2026-03-07 01:01:14.634570 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.69s 2026-03-07 01:01:14.634578 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 1.17s 2026-03-07 01:01:14.634586 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.96s 2026-03-07 01:01:14.634594 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.89s 2026-03-07 01:01:14.634601 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.85s 2026-03-07 01:01:14.634609 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.82s 2026-03-07 01:01:14.634618 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.72s 2026-03-07 01:01:14.634631 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.71s 2026-03-07 01:01:14.634643 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.70s 2026-03-07 01:01:14.634655 | orchestrator | ceph-facts : Check for a ceph mon socket -------------------------------- 0.69s 2026-03-07 01:01:14.634667 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.69s 2026-03-07 01:01:14.634679 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.67s 2026-03-07 01:01:14.634692 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.67s 2026-03-07 
01:01:14.634705 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.66s 2026-03-07 01:01:14.634717 | orchestrator | 2026-03-07 01:01:14 | INFO  | Task 73f16833-6364-4c09-a21b-6133ec21f593 is in state STARTED 2026-03-07 01:01:14.634732 | orchestrator | 2026-03-07 01:01:14 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:01:17.683227 | orchestrator | 2026-03-07 01:01:17 | INFO  | Task caa551a3-2c7b-45b5-8b35-8055f51c9eea is in state STARTED 2026-03-07 01:01:17.684384 | orchestrator | 2026-03-07 01:01:17 | INFO  | Task 73f16833-6364-4c09-a21b-6133ec21f593 is in state STARTED 2026-03-07 01:01:17.686137 | orchestrator | 2026-03-07 01:01:17 | INFO  | Task 2b82a2f7-ab18-4cd1-927c-1eb8e9e8946b is in state STARTED 2026-03-07 01:01:17.686204 | orchestrator | 2026-03-07 01:01:17 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:01:57.348563 | orchestrator | 2026-03-07 01:01:57 | INFO  | Task caa551a3-2c7b-45b5-8b35-8055f51c9eea is in state STARTED 2026-03-07 01:01:57.351024 | orchestrator | 2026-03-07 01:01:57 | INFO  | Task 73f16833-6364-4c09-a21b-6133ec21f593 is in state SUCCESS 2026-03-07 01:01:57.352961 | orchestrator | 2026-03-07 01:01:57.353019 | orchestrator | 2026-03-07 01:01:57.353029 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-07 01:01:57.353037 | orchestrator | 2026-03-07 01:01:57.353044 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-07 01:01:57.353051 | orchestrator | Saturday 07 March 2026 01:00:06 +0000 (0:00:00.335) 0:00:00.335 ******** 2026-03-07 01:01:57.353059 | orchestrator | ok: [testbed-node-0] 2026-03-07
01:01:57.353067 | orchestrator | ok: [testbed-node-1] 2026-03-07 01:01:57.353073 | orchestrator | ok: [testbed-node-2] 2026-03-07 01:01:57.353079 | orchestrator | 2026-03-07 01:01:57.353086 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-07 01:01:57.353093 | orchestrator | Saturday 07 March 2026 01:00:07 +0000 (0:00:00.351) 0:00:00.687 ******** 2026-03-07 01:01:57.353100 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2026-03-07 01:01:57.353107 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2026-03-07 01:01:57.353114 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2026-03-07 01:01:57.353120 | orchestrator | 2026-03-07 01:01:57.353127 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2026-03-07 01:01:57.353133 | orchestrator | 2026-03-07 01:01:57.353139 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-03-07 01:01:57.353146 | orchestrator | Saturday 07 March 2026 01:00:07 +0000 (0:00:00.469) 0:00:01.156 ******** 2026-03-07 01:01:57.353152 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 01:01:57.353161 | orchestrator | 2026-03-07 01:01:57.353168 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2026-03-07 01:01:57.353175 | orchestrator | Saturday 07 March 2026 01:00:08 +0000 (0:00:00.563) 0:00:01.719 ******** 2026-03-07 01:01:57.353228 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 
'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-07 01:01:57.353258 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 
'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': 
[]}}}}) 2026-03-07 01:01:57.353276 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-07 01:01:57.353284 | orchestrator | 2026-03-07 01:01:57.353293 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2026-03-07 01:01:57.353300 | orchestrator | Saturday 07 March 2026 01:00:09 +0000 (0:00:01.071) 0:00:02.791 ******** 2026-03-07 01:01:57.353307 | orchestrator | ok: [testbed-node-0] 2026-03-07 01:01:57.353314 | orchestrator | ok: [testbed-node-1] 2026-03-07 01:01:57.353323 | orchestrator | ok: [testbed-node-2] 2026-03-07 01:01:57.353334 | orchestrator | 2026-03-07 01:01:57.353344 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-03-07 01:01:57.353355 | orchestrator | Saturday 07 March 2026 01:00:09 +0000 (0:00:00.552) 0:00:03.344 ******** 2026-03-07 01:01:57.353366 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2026-03-07 01:01:57.353383 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2026-03-07 01:01:57.353390 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2026-03-07 01:01:57.353397 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2026-03-07 01:01:57.353404 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2026-03-07 01:01:57.353410 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2026-03-07 01:01:57.353417 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2026-03-07 01:01:57.353424 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2026-03-07 01:01:57.353436 | orchestrator | skipping: [testbed-node-1] => 
(item={'name': 'cloudkitty', 'enabled': False})  2026-03-07 01:01:57.353443 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2026-03-07 01:01:57.353450 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2026-03-07 01:01:57.353456 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2026-03-07 01:01:57.353463 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2026-03-07 01:01:57.353470 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2026-03-07 01:01:57.353477 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2026-03-07 01:01:57.353483 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2026-03-07 01:01:57.353490 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2026-03-07 01:01:57.353497 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2026-03-07 01:01:57.353508 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2026-03-07 01:01:57.353515 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2026-03-07 01:01:57.353523 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2026-03-07 01:01:57.353531 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2026-03-07 01:01:57.353538 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2026-03-07 01:01:57.353546 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2026-03-07 01:01:57.353554 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => 
(item={'name': 'ceilometer', 'enabled': 'yes'}) 2026-03-07 01:01:57.353563 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2026-03-07 01:01:57.353571 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2026-03-07 01:01:57.353578 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2026-03-07 01:01:57.353585 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2026-03-07 01:01:57.353592 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2026-03-07 01:01:57.353599 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2026-03-07 01:01:57.353606 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2026-03-07 01:01:57.353613 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2026-03-07 01:01:57.353621 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2026-03-07 01:01:57.353627 | orchestrator | 2026-03-07 01:01:57.353635 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-07 
01:01:57.353642 | orchestrator | Saturday 07 March 2026 01:00:10 +0000 (0:00:00.900) 0:00:04.244 ******** 2026-03-07 01:01:57.353654 | orchestrator | ok: [testbed-node-0] 2026-03-07 01:01:57.353661 | orchestrator | ok: [testbed-node-1] 2026-03-07 01:01:57.353668 | orchestrator | ok: [testbed-node-2] 2026-03-07 01:01:57.353675 | orchestrator | 2026-03-07 01:01:57.353682 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-07 01:01:57.353689 | orchestrator | Saturday 07 March 2026 01:00:11 +0000 (0:00:00.405) 0:00:04.649 ******** 2026-03-07 01:01:57.353695 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:01:57.353700 | orchestrator | 2026-03-07 01:01:57.353708 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-07 01:01:57.353713 | orchestrator | Saturday 07 March 2026 01:00:11 +0000 (0:00:00.132) 0:00:04.782 ******** 2026-03-07 01:01:57.353717 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:01:57.353722 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:01:57.353726 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:01:57.353731 | orchestrator | 2026-03-07 01:01:57.353736 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-07 01:01:57.353740 | orchestrator | Saturday 07 March 2026 01:00:11 +0000 (0:00:00.543) 0:00:05.326 ******** 2026-03-07 01:01:57.353744 | orchestrator | ok: [testbed-node-0] 2026-03-07 01:01:57.353749 | orchestrator | ok: [testbed-node-1] 2026-03-07 01:01:57.353754 | orchestrator | ok: [testbed-node-2] 2026-03-07 01:01:57.353758 | orchestrator | 2026-03-07 01:01:57.353762 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-07 01:01:57.353767 | orchestrator | Saturday 07 March 2026 01:00:12 +0000 (0:00:00.383) 0:00:05.709 ******** 2026-03-07 01:01:57.353771 | orchestrator | skipping: [testbed-node-0] 
2026-03-07 01:01:57.353776 | orchestrator | 2026-03-07 01:01:57.353782 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-07 01:01:57.353789 | orchestrator | Saturday 07 March 2026 01:00:12 +0000 (0:00:00.138) 0:00:05.847 ******** 2026-03-07 01:01:57.353794 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:01:57.353800 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:01:57.353807 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:01:57.353813 | orchestrator | 2026-03-07 01:01:57.353819 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-07 01:01:57.353825 | orchestrator | Saturday 07 March 2026 01:00:12 +0000 (0:00:00.322) 0:00:06.169 ******** 2026-03-07 01:01:57.353831 | orchestrator | ok: [testbed-node-0] 2026-03-07 01:01:57.353837 | orchestrator | ok: [testbed-node-1] 2026-03-07 01:01:57.353843 | orchestrator | ok: [testbed-node-2] 2026-03-07 01:01:57.353849 | orchestrator | 2026-03-07 01:01:57.353855 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-07 01:01:57.353862 | orchestrator | Saturday 07 March 2026 01:00:13 +0000 (0:00:00.338) 0:00:06.508 ******** 2026-03-07 01:01:57.353869 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:01:57.353876 | orchestrator | 2026-03-07 01:01:57.353886 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-07 01:01:57.353894 | orchestrator | Saturday 07 March 2026 01:00:13 +0000 (0:00:00.380) 0:00:06.888 ******** 2026-03-07 01:01:57.353900 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:01:57.353907 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:01:57.353912 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:01:57.353916 | orchestrator | 2026-03-07 01:01:57.353919 | orchestrator | TASK [horizon : Update policy file name] 
*************************************** 2026-03-07 01:01:57.353923 | orchestrator | Saturday 07 March 2026 01:00:13 +0000 (0:00:00.315) 0:00:07.203 ******** 2026-03-07 01:01:57.353927 | orchestrator | ok: [testbed-node-0] 2026-03-07 01:01:57.353931 | orchestrator | ok: [testbed-node-1] 2026-03-07 01:01:57.353934 | orchestrator | ok: [testbed-node-2] 2026-03-07 01:01:57.353938 | orchestrator | 2026-03-07 01:01:57.353982 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-07 01:01:57.353987 | orchestrator | Saturday 07 March 2026 01:00:14 +0000 (0:00:00.341) 0:00:07.545 ******** 2026-03-07 01:01:57.353991 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:01:57.353999 | orchestrator | 2026-03-07 01:01:57.354002 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-07 01:01:57.354006 | orchestrator | Saturday 07 March 2026 01:00:14 +0000 (0:00:00.135) 0:00:07.680 ******** 2026-03-07 01:01:57.354010 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:01:57.354049 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:01:57.354055 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:01:57.354059 | orchestrator | 2026-03-07 01:01:57.354063 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-07 01:01:57.354067 | orchestrator | Saturday 07 March 2026 01:00:14 +0000 (0:00:00.339) 0:00:08.020 ******** 2026-03-07 01:01:57.354071 | orchestrator | ok: [testbed-node-0] 2026-03-07 01:01:57.354074 | orchestrator | ok: [testbed-node-1] 2026-03-07 01:01:57.354078 | orchestrator | ok: [testbed-node-2] 2026-03-07 01:01:57.354082 | orchestrator | 2026-03-07 01:01:57.354086 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-07 01:01:57.354090 | orchestrator | Saturday 07 March 2026 01:00:15 +0000 (0:00:00.599) 0:00:08.619 ******** 2026-03-07 
01:01:57.354094 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:01:57.354097 | orchestrator | 2026-03-07 01:01:57.354101 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-07 01:01:57.354105 | orchestrator | Saturday 07 March 2026 01:00:15 +0000 (0:00:00.155) 0:00:08.775 ******** 2026-03-07 01:01:57.354109 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:01:57.354112 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:01:57.354116 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:01:57.354120 | orchestrator | 2026-03-07 01:01:57.354124 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-07 01:01:57.354128 | orchestrator | Saturday 07 March 2026 01:00:15 +0000 (0:00:00.356) 0:00:09.131 ******** 2026-03-07 01:01:57.354131 | orchestrator | ok: [testbed-node-0] 2026-03-07 01:01:57.354135 | orchestrator | ok: [testbed-node-1] 2026-03-07 01:01:57.354139 | orchestrator | ok: [testbed-node-2] 2026-03-07 01:01:57.354143 | orchestrator | 2026-03-07 01:01:57.354147 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-07 01:01:57.354151 | orchestrator | Saturday 07 March 2026 01:00:16 +0000 (0:00:00.411) 0:00:09.543 ******** 2026-03-07 01:01:57.354154 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:01:57.354158 | orchestrator | 2026-03-07 01:01:57.354162 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-07 01:01:57.354166 | orchestrator | Saturday 07 March 2026 01:00:16 +0000 (0:00:00.157) 0:00:09.700 ******** 2026-03-07 01:01:57.354169 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:01:57.354173 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:01:57.354177 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:01:57.354180 | orchestrator | 2026-03-07 01:01:57.354184 | orchestrator | TASK [horizon : 
Update policy file name] *************************************** 2026-03-07 01:01:57.354194 | orchestrator | Saturday 07 March 2026 01:00:16 +0000 (0:00:00.303) 0:00:10.004 ******** 2026-03-07 01:01:57.354198 | orchestrator | ok: [testbed-node-0] 2026-03-07 01:01:57.354202 | orchestrator | ok: [testbed-node-1] 2026-03-07 01:01:57.354205 | orchestrator | ok: [testbed-node-2] 2026-03-07 01:01:57.354209 | orchestrator | 2026-03-07 01:01:57.354213 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-07 01:01:57.354217 | orchestrator | Saturday 07 March 2026 01:00:17 +0000 (0:00:00.625) 0:00:10.629 ******** 2026-03-07 01:01:57.354220 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:01:57.354224 | orchestrator | 2026-03-07 01:01:57.354228 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-07 01:01:57.354232 | orchestrator | Saturday 07 March 2026 01:00:17 +0000 (0:00:00.144) 0:00:10.774 ******** 2026-03-07 01:01:57.354236 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:01:57.354239 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:01:57.354243 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:01:57.354247 | orchestrator | 2026-03-07 01:01:57.354255 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-07 01:01:57.354259 | orchestrator | Saturday 07 March 2026 01:00:17 +0000 (0:00:00.344) 0:00:11.119 ******** 2026-03-07 01:01:57.354263 | orchestrator | ok: [testbed-node-0] 2026-03-07 01:01:57.354266 | orchestrator | ok: [testbed-node-1] 2026-03-07 01:01:57.354270 | orchestrator | ok: [testbed-node-2] 2026-03-07 01:01:57.354274 | orchestrator | 2026-03-07 01:01:57.354277 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-07 01:01:57.354281 | orchestrator | Saturday 07 March 2026 01:00:18 +0000 (0:00:00.344) 0:00:11.463 ******** 
2026-03-07 01:01:57.354285 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:01:57.354289 | orchestrator | 2026-03-07 01:01:57.354295 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-07 01:01:57.354301 | orchestrator | Saturday 07 March 2026 01:00:18 +0000 (0:00:00.145) 0:00:11.609 ******** 2026-03-07 01:01:57.354311 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:01:57.354318 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:01:57.354325 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:01:57.354331 | orchestrator | 2026-03-07 01:01:57.354338 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-07 01:01:57.354344 | orchestrator | Saturday 07 March 2026 01:00:18 +0000 (0:00:00.655) 0:00:12.265 ******** 2026-03-07 01:01:57.354350 | orchestrator | ok: [testbed-node-0] 2026-03-07 01:01:57.354360 | orchestrator | ok: [testbed-node-1] 2026-03-07 01:01:57.354367 | orchestrator | ok: [testbed-node-2] 2026-03-07 01:01:57.354373 | orchestrator | 2026-03-07 01:01:57.354379 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-07 01:01:57.354385 | orchestrator | Saturday 07 March 2026 01:00:19 +0000 (0:00:00.368) 0:00:12.633 ******** 2026-03-07 01:01:57.354393 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:01:57.354399 | orchestrator | 2026-03-07 01:01:57.354406 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-07 01:01:57.354412 | orchestrator | Saturday 07 March 2026 01:00:19 +0000 (0:00:00.171) 0:00:12.804 ******** 2026-03-07 01:01:57.354419 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:01:57.354425 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:01:57.354432 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:01:57.354438 | orchestrator | 2026-03-07 01:01:57.354445 | orchestrator | TASK 
[horizon : Update policy file name] *************************************** 2026-03-07 01:01:57.354452 | orchestrator | Saturday 07 March 2026 01:00:19 +0000 (0:00:00.336) 0:00:13.141 ******** 2026-03-07 01:01:57.354458 | orchestrator | ok: [testbed-node-0] 2026-03-07 01:01:57.354465 | orchestrator | ok: [testbed-node-1] 2026-03-07 01:01:57.354472 | orchestrator | ok: [testbed-node-2] 2026-03-07 01:01:57.354478 | orchestrator | 2026-03-07 01:01:57.354485 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-07 01:01:57.354492 | orchestrator | Saturday 07 March 2026 01:00:20 +0000 (0:00:00.358) 0:00:13.500 ******** 2026-03-07 01:01:57.354499 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:01:57.354506 | orchestrator | 2026-03-07 01:01:57.354510 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-07 01:01:57.354514 | orchestrator | Saturday 07 March 2026 01:00:20 +0000 (0:00:00.127) 0:00:13.628 ******** 2026-03-07 01:01:57.354518 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:01:57.354522 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:01:57.354526 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:01:57.354530 | orchestrator | 2026-03-07 01:01:57.354534 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2026-03-07 01:01:57.354537 | orchestrator | Saturday 07 March 2026 01:00:20 +0000 (0:00:00.593) 0:00:14.221 ******** 2026-03-07 01:01:57.354541 | orchestrator | changed: [testbed-node-2] 2026-03-07 01:01:57.354545 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:01:57.354549 | orchestrator | changed: [testbed-node-1] 2026-03-07 01:01:57.354552 | orchestrator | 2026-03-07 01:01:57.354556 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2026-03-07 01:01:57.354565 | orchestrator | Saturday 07 March 2026 01:00:22 +0000 
(0:00:01.759) 0:00:15.980 ******** 2026-03-07 01:01:57.354569 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-03-07 01:01:57.354573 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-03-07 01:01:57.354577 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-03-07 01:01:57.354581 | orchestrator | 2026-03-07 01:01:57.354585 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2026-03-07 01:01:57.354589 | orchestrator | Saturday 07 March 2026 01:00:24 +0000 (0:00:01.955) 0:00:17.936 ******** 2026-03-07 01:01:57.354593 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-03-07 01:01:57.354597 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-03-07 01:01:57.354601 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-03-07 01:01:57.354605 | orchestrator | 2026-03-07 01:01:57.354609 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2026-03-07 01:01:57.354617 | orchestrator | Saturday 07 March 2026 01:00:27 +0000 (0:00:02.591) 0:00:20.527 ******** 2026-03-07 01:01:57.354621 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-03-07 01:01:57.354625 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-03-07 01:01:57.354629 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-03-07 01:01:57.354632 | orchestrator | 2026-03-07 01:01:57.354636 | orchestrator | TASK [horizon : Copying over existing policy file] 
***************************** 2026-03-07 01:01:57.354640 | orchestrator | Saturday 07 March 2026 01:00:29 +0000 (0:00:02.191) 0:00:22.719 ******** 2026-03-07 01:01:57.354643 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:01:57.354647 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:01:57.354651 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:01:57.354655 | orchestrator | 2026-03-07 01:01:57.354659 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2026-03-07 01:01:57.354665 | orchestrator | Saturday 07 March 2026 01:00:29 +0000 (0:00:00.496) 0:00:23.216 ******** 2026-03-07 01:01:57.354673 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:01:57.354681 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:01:57.354687 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:01:57.354692 | orchestrator | 2026-03-07 01:01:57.354699 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-03-07 01:01:57.354705 | orchestrator | Saturday 07 March 2026 01:00:30 +0000 (0:00:00.321) 0:00:23.537 ******** 2026-03-07 01:01:57.354711 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 01:01:57.354717 | orchestrator | 2026-03-07 01:01:57.354723 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2026-03-07 01:01:57.354729 | orchestrator | Saturday 07 March 2026 01:00:31 +0000 (0:00:00.952) 0:00:24.490 ******** 2026-03-07 01:01:57.354742 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 
'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-07 01:01:57.354774 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 
'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': 
[]}}}}) 2026-03-07 01:01:57.354782 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-07 01:01:57.354794 | orchestrator | 2026-03-07 01:01:57.354801 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2026-03-07 01:01:57.354807 | orchestrator | Saturday 07 March 2026 01:00:32 +0000 (0:00:01.631) 0:00:26.122 ******** 2026-03-07 01:01:57.354823 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 
'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-07 01:01:57.354836 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:01:57.354848 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': 
['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-07 01:01:57.354856 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:01:57.354867 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-07 01:01:57.354879 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:01:57.354886 | orchestrator | 2026-03-07 01:01:57.354894 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2026-03-07 01:01:57.354900 | orchestrator | Saturday 07 March 2026 01:00:33 +0000 (0:00:00.684) 0:00:26.806 ******** 2026-03-07 01:01:57.354913 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 
'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-07 01:01:57.354920 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:01:57.354931 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 
'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 
'with_frontend': False, 'custom_member_list': []}}}})  2026-03-07 01:01:57.354966 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:01:57.354978 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': 
True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-07 01:01:57.354985 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:01:57.354992 | orchestrator | 2026-03-07 01:01:57.354998 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2026-03-07 01:01:57.355003 | orchestrator | Saturday 07 March 2026 01:00:34 +0000 (0:00:00.892) 0:00:27.698 ******** 2026-03-07 01:01:57.355014 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 
'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-07 01:01:57.355030 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': 
['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-07 01:01:57.355041 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-07 01:01:57.355052 | orchestrator | 2026-03-07 01:01:57.355058 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-03-07 01:01:57.355063 | orchestrator | Saturday 07 March 2026 01:00:36 +0000 (0:00:01.755) 0:00:29.454 ******** 2026-03-07 01:01:57.355069 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:01:57.355075 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:01:57.355081 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:01:57.355087 | orchestrator | 2026-03-07 01:01:57.355093 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-03-07 01:01:57.355099 | orchestrator | Saturday 07 March 2026 01:00:36 
+0000 (0:00:00.386) 0:00:29.841 ******** 2026-03-07 01:01:57.355105 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 01:01:57.355111 | orchestrator | 2026-03-07 01:01:57.355118 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2026-03-07 01:01:57.355130 | orchestrator | Saturday 07 March 2026 01:00:37 +0000 (0:00:00.711) 0:00:30.553 ******** 2026-03-07 01:01:57.355138 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:01:57.355145 | orchestrator | 2026-03-07 01:01:57.355152 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2026-03-07 01:01:57.355159 | orchestrator | Saturday 07 March 2026 01:00:39 +0000 (0:00:02.581) 0:00:33.134 ******** 2026-03-07 01:01:57.355167 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:01:57.355174 | orchestrator | 2026-03-07 01:01:57.355181 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2026-03-07 01:01:57.355187 | orchestrator | Saturday 07 March 2026 01:00:42 +0000 (0:00:02.794) 0:00:35.929 ******** 2026-03-07 01:01:57.355193 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:01:57.355200 | orchestrator | 2026-03-07 01:01:57.355207 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-03-07 01:01:57.355213 | orchestrator | Saturday 07 March 2026 01:00:59 +0000 (0:00:17.097) 0:00:53.027 ******** 2026-03-07 01:01:57.355224 | orchestrator | 2026-03-07 01:01:57.355230 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-03-07 01:01:57.355237 | orchestrator | Saturday 07 March 2026 01:00:59 +0000 (0:00:00.063) 0:00:53.090 ******** 2026-03-07 01:01:57.355243 | orchestrator | 2026-03-07 01:01:57.355249 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 
2026-03-07 01:01:57.355256 | orchestrator | Saturday 07 March 2026 01:00:59 +0000 (0:00:00.072) 0:00:53.163 ******** 2026-03-07 01:01:57.355263 | orchestrator | 2026-03-07 01:01:57.355270 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2026-03-07 01:01:57.355277 | orchestrator | Saturday 07 March 2026 01:00:59 +0000 (0:00:00.075) 0:00:53.238 ******** 2026-03-07 01:01:57.355283 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:01:57.355289 | orchestrator | changed: [testbed-node-2] 2026-03-07 01:01:57.355296 | orchestrator | changed: [testbed-node-1] 2026-03-07 01:01:57.355303 | orchestrator | 2026-03-07 01:01:57.355309 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-07 01:01:57.355316 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-03-07 01:01:57.355327 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-03-07 01:01:57.355333 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-03-07 01:01:57.355340 | orchestrator | 2026-03-07 01:01:57.355346 | orchestrator | 2026-03-07 01:01:57.355352 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-07 01:01:57.355358 | orchestrator | Saturday 07 March 2026 01:01:56 +0000 (0:00:56.250) 0:01:49.489 ******** 2026-03-07 01:01:57.355364 | orchestrator | =============================================================================== 2026-03-07 01:01:57.355371 | orchestrator | horizon : Restart horizon container ------------------------------------ 56.25s 2026-03-07 01:01:57.355377 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 17.10s 2026-03-07 01:01:57.355383 | orchestrator | horizon : Creating Horizon database user and setting permissions 
-------- 2.79s 2026-03-07 01:01:57.355390 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.59s 2026-03-07 01:01:57.355394 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.58s 2026-03-07 01:01:57.355397 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 2.19s 2026-03-07 01:01:57.355401 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 1.96s 2026-03-07 01:01:57.355405 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.76s 2026-03-07 01:01:57.355409 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.76s 2026-03-07 01:01:57.355413 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.63s 2026-03-07 01:01:57.355417 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.07s 2026-03-07 01:01:57.355420 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.95s 2026-03-07 01:01:57.355424 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.90s 2026-03-07 01:01:57.355427 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 0.89s 2026-03-07 01:01:57.355431 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.71s 2026-03-07 01:01:57.355435 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.68s 2026-03-07 01:01:57.355439 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.66s 2026-03-07 01:01:57.355443 | orchestrator | horizon : Update policy file name --------------------------------------- 0.63s 2026-03-07 01:01:57.355446 | orchestrator | horizon : Update policy file name --------------------------------------- 
0.60s
2026-03-07 01:01:57.355454 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.59s
2026-03-07 01:01:57.355458 | orchestrator | 2026-03-07 01:01:57 | INFO  | Task 2b82a2f7-ab18-4cd1-927c-1eb8e9e8946b is in state SUCCESS
2026-03-07 01:01:57.355462 | orchestrator | 2026-03-07 01:01:57 | INFO  | Wait 1 second(s) until the next check
2026-03-07 01:02:00.399331 | orchestrator | 2026-03-07 01:02:00 | INFO  | Task caa551a3-2c7b-45b5-8b35-8055f51c9eea is in state STARTED
2026-03-07 01:02:00.400591 | orchestrator | 2026-03-07 01:02:00 | INFO  | Task 672d182c-1f80-483b-acc6-95d6e3a0ef6b is in state STARTED
2026-03-07 01:02:00.400663 | orchestrator | 2026-03-07 01:02:00 | INFO  | Wait 1 second(s) until the next check
2026-03-07 01:03:01.457109 | orchestrator | 2026-03-07 01:03:01 | INFO  | Task caa551a3-2c7b-45b5-8b35-8055f51c9eea is in state STARTED
2026-03-07 01:03:01.460148 | orchestrator | 2026-03-07 01:03:01 | INFO  | Task 672d182c-1f80-483b-acc6-95d6e3a0ef6b is in state SUCCESS
2026-03-07 01:03:01.460500 | orchestrator |
2026-03-07 01:03:01.460539 | orchestrator |
2026-03-07 01:03:01.460550 | orchestrator | PLAY [Copy ceph keys to the configuration repository] **************************
2026-03-07 01:03:01.460560 | orchestrator |
2026-03-07 01:03:01.460568 | orchestrator | TASK [Check if ceph keys exist] ************************************************
2026-03-07 01:03:01.460577 | orchestrator | Saturday 07 March 2026 01:01:19 +0000 (0:00:00.182) 0:00:00.182 ********
2026-03-07 01:03:01.460585 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring)
2026-03-07 01:03:01.460596 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-03-07 01:03:01.460610 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-03-07 01:03:01.460623 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring)
2026-03-07 01:03:01.460635 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-03-07 01:03:01.460648 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring)
2026-03-07 01:03:01.460661 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring)
2026-03-07 01:03:01.460674 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring)
2026-03-07 01:03:01.460688 |
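The client output above shows a simple fixed-interval polling pattern: query the state of each submitted task, and wait one second between rounds until every task leaves the STARTED state. A minimal sketch of that pattern in Python, assuming a hypothetical `get_task_state` callable that stands in for the real OSISM task API:

```python
import time

def wait_for_tasks(task_ids, get_task_state, interval=1.0):
    """Poll each task until it reaches a terminal state.

    get_task_state(task_id) -> str is an assumed stand-in for whatever
    the real client exposes; SUCCESS/FAILURE are treated as terminal,
    mirroring the states seen in the log.
    """
    pending = set(task_ids)
    states = {}
    while pending:
        for task_id in list(pending):
            state = get_task_state(task_id)
            states[task_id] = state
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)
        if pending:
            # "Wait 1 second(s) until the next check"
            time.sleep(interval)
    return states
```

Note that tasks finish independently: in the log, 672d182c reaches SUCCESS while caa551a3 is still STARTED, so the loop keeps polling only the remaining task.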
orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring)
2026-03-07 01:03:01.460701 | orchestrator |
2026-03-07 01:03:01.460714 | orchestrator | TASK [Fetch all ceph keys] *****************************************************
2026-03-07 01:03:01.460727 | orchestrator | Saturday 07 March 2026 01:01:23 +0000 (0:00:04.618) 0:00:04.800 ********
2026-03-07 01:03:01.460735 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring)
2026-03-07 01:03:01.460743 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-03-07 01:03:01.460751 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-03-07 01:03:01.460760 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring)
2026-03-07 01:03:01.460768 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-03-07 01:03:01.460776 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring)
2026-03-07 01:03:01.460784 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring)
2026-03-07 01:03:01.460791 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring)
2026-03-07 01:03:01.460799 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring)
2026-03-07 01:03:01.460832 | orchestrator |
2026-03-07 01:03:01.460840 | orchestrator | TASK [Create share directory] **************************************************
2026-03-07 01:03:01.460869 | orchestrator | Saturday 07 March 2026 01:01:28 +0000 (0:00:04.475) 0:00:09.275 ********
2026-03-07 01:03:01.460879 | orchestrator | changed: [testbed-manager -> localhost]
2026-03-07 01:03:01.460888 | orchestrator |
2026-03-07 01:03:01.460896 | orchestrator | TASK [Write ceph keys to the share directory] **********************************
2026-03-07 01:03:01.460903 | orchestrator | Saturday 07 March 2026 01:01:29 +0000 (0:00:01.290) 0:00:10.566 ********
2026-03-07 01:03:01.460911 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring)
2026-03-07 01:03:01.460919 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-03-07 01:03:01.460927 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-03-07 01:03:01.460948 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring)
2026-03-07 01:03:01.460956 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-03-07 01:03:01.460964 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring)
2026-03-07 01:03:01.460996 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring)
2026-03-07 01:03:01.461006 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring)
2026-03-07 01:03:01.461014 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring)
2026-03-07 01:03:01.461021 | orchestrator |
2026-03-07 01:03:01.461029 | orchestrator | TASK [Check if target directories exist] ***************************************
2026-03-07 01:03:01.461037 | orchestrator | Saturday 07 March 2026 01:01:45 +0000 (0:00:15.988) 0:00:26.554 ********
2026-03-07 01:03:01.461045 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/infrastructure/files/ceph)
2026-03-07 01:03:01.461053 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume)
2026-03-07 01:03:01.461062 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup)
2026-03-07 01:03:01.461069 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup)
2026-03-07 01:03:01.461090 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova)
2026-03-07 01:03:01.461099 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova)
2026-03-07 01:03:01.461109 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/glance)
2026-03-07 01:03:01.461118 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/gnocchi)
2026-03-07 01:03:01.461128 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/manila)
2026-03-07 01:03:01.461137 | orchestrator |
2026-03-07 01:03:01.461147 | orchestrator | TASK [Write ceph keys to the configuration directory] **************************
2026-03-07 01:03:01.461157 | orchestrator | Saturday 07 March 2026 01:01:48 +0000 (0:00:03.307) 0:00:29.862 ********
2026-03-07 01:03:01.461168 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring)
2026-03-07 01:03:01.461179 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-03-07 01:03:01.461188 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-03-07 01:03:01.461198 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring)
2026-03-07 01:03:01.461208 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-03-07 01:03:01.461218 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring)
2026-03-07 01:03:01.461227 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring)
2026-03-07 01:03:01.461244 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring)
2026-03-07 01:03:01.461254 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring)
2026-03-07 01:03:01.461263 | orchestrator |
2026-03-07 01:03:01.461273 | orchestrator | PLAY RECAP *********************************************************************
2026-03-07 01:03:01.461283 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-07 01:03:01.461294 | orchestrator |
2026-03-07 01:03:01.461304 | orchestrator |
2026-03-07 01:03:01.461314 | orchestrator | TASKS RECAP ********************************************************************
2026-03-07 01:03:01.461323 | orchestrator | Saturday 07 March 2026 01:01:56 +0000 (0:00:07.406) 0:00:37.269 ********
2026-03-07 01:03:01.461333 | orchestrator | ===============================================================================
2026-03-07 01:03:01.461343 | orchestrator | Write ceph keys to the share directory --------------------------------- 15.99s
2026-03-07 01:03:01.461352 | orchestrator | Write ceph keys to the configuration directory -------------------------- 7.41s
2026-03-07 01:03:01.461362 | orchestrator | Check if ceph keys exist ------------------------------------------------ 4.62s
2026-03-07 01:03:01.461371 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.48s
2026-03-07 01:03:01.461382 | orchestrator | Check if target directories exist --------------------------------------- 3.31s
2026-03-07 01:03:01.461391 | orchestrator | Create share directory -------------------------------------------------- 1.29s
2026-03-07 01:03:01.461401 | orchestrator |
2026-03-07 01:03:01.461410 | orchestrator |
2026-03-07 01:03:01.461420 | orchestrator | PLAY [Apply role cephclient] ***************************************************
2026-03-07 01:03:01.461430 | orchestrator |
2026-03-07 01:03:01.461440 |
orchestrator | TASK [osism.services.cephclient : Include container tasks] *********************
2026-03-07 01:03:01.461450 | orchestrator | Saturday 07 March 2026 01:02:02 +0000 (0:00:00.282) 0:00:00.282 ********
2026-03-07 01:03:01.461460 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager
2026-03-07 01:03:01.461471 | orchestrator |
2026-03-07 01:03:01.461480 | orchestrator | TASK [osism.services.cephclient : Create required directories] *****************
2026-03-07 01:03:01.461487 | orchestrator | Saturday 07 March 2026 01:02:02 +0000 (0:00:00.273) 0:00:00.556 ********
2026-03-07 01:03:01.461495 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration)
2026-03-07 01:03:01.461503 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data)
2026-03-07 01:03:01.461516 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient)
2026-03-07 01:03:01.461524 | orchestrator |
2026-03-07 01:03:01.461532 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ********************
2026-03-07 01:03:01.461539 | orchestrator | Saturday 07 March 2026 01:02:04 +0000 (0:00:01.441) 0:00:01.997 ********
2026-03-07 01:03:01.461547 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'})
2026-03-07 01:03:01.461555 | orchestrator |
2026-03-07 01:03:01.461563 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] ***************************
2026-03-07 01:03:01.461571 | orchestrator | Saturday 07 March 2026 01:02:05 +0000 (0:00:01.610) 0:00:03.608 ********
2026-03-07 01:03:01.461579 | orchestrator | changed: [testbed-manager]
2026-03-07 01:03:01.461587 | orchestrator |
2026-03-07 01:03:01.461596 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] ****************
2026-03-07 01:03:01.461610 | orchestrator | Saturday 07 March 2026 01:02:06 +0000 (0:00:00.969) 0:00:04.578 ********
2026-03-07 01:03:01.461623 | orchestrator | changed: [testbed-manager]
2026-03-07 01:03:01.461636 | orchestrator |
2026-03-07 01:03:01.461649 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] *******************
2026-03-07 01:03:01.461662 | orchestrator | Saturday 07 March 2026 01:02:07 +0000 (0:00:00.890) 0:00:05.468 ********
2026-03-07 01:03:01.461676 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left).
2026-03-07 01:03:01.461699 | orchestrator | ok: [testbed-manager]
2026-03-07 01:03:01.461712 | orchestrator |
2026-03-07 01:03:01.461727 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************
2026-03-07 01:03:01.461742 | orchestrator | Saturday 07 March 2026 01:02:49 +0000 (0:00:42.056) 0:00:47.525 ********
2026-03-07 01:03:01.461750 | orchestrator | changed: [testbed-manager] => (item=ceph)
2026-03-07 01:03:01.461759 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool)
2026-03-07 01:03:01.461767 | orchestrator | changed: [testbed-manager] => (item=rados)
2026-03-07 01:03:01.461775 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin)
2026-03-07 01:03:01.461782 | orchestrator | changed: [testbed-manager] => (item=rbd)
2026-03-07 01:03:01.461790 | orchestrator |
2026-03-07 01:03:01.461798 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ******************
2026-03-07 01:03:01.461806 | orchestrator | Saturday 07 March 2026 01:02:54 +0000 (0:00:04.525) 0:00:52.051 ********
2026-03-07 01:03:01.461814 | orchestrator | ok: [testbed-manager] => (item=crushtool)
2026-03-07 01:03:01.461822 | orchestrator |
2026-03-07 01:03:01.461829 | orchestrator | TASK [osism.services.cephclient : Include package tasks] ***********************
2026-03-07 01:03:01.461837 | orchestrator | Saturday 07 March 2026 01:02:54 +0000 (0:00:00.503) 0:00:52.555 ********
2026-03-07 01:03:01.461845 | orchestrator | skipping: [testbed-manager]
2026-03-07 01:03:01.461853 | orchestrator |
2026-03-07 01:03:01.461861 | orchestrator | TASK [osism.services.cephclient : Include rook task] ***************************
2026-03-07 01:03:01.461869 | orchestrator | Saturday 07 March 2026 01:02:54 +0000 (0:00:00.138) 0:00:52.693 ********
2026-03-07 01:03:01.461876 | orchestrator | skipping: [testbed-manager]
2026-03-07 01:03:01.461884 | orchestrator |
2026-03-07 01:03:01.461892 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] *******
2026-03-07 01:03:01.461900 | orchestrator | Saturday 07 March 2026 01:02:55 +0000 (0:00:00.587) 0:00:53.281 ********
2026-03-07 01:03:01.461908 | orchestrator | changed: [testbed-manager]
2026-03-07 01:03:01.461915 | orchestrator |
2026-03-07 01:03:01.461923 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] ***
2026-03-07 01:03:01.461931 | orchestrator | Saturday 07 March 2026 01:02:56 +0000 (0:00:01.540) 0:00:54.821 ********
2026-03-07 01:03:01.461939 | orchestrator | changed: [testbed-manager]
2026-03-07 01:03:01.461947 | orchestrator |
2026-03-07 01:03:01.461955 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ******
2026-03-07 01:03:01.461963 | orchestrator | Saturday 07 March 2026 01:02:57 +0000 (0:00:00.843) 0:00:55.665 ********
2026-03-07 01:03:01.461970 | orchestrator | changed: [testbed-manager]
2026-03-07 01:03:01.462182 | orchestrator |
2026-03-07 01:03:01.462192 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] *****
2026-03-07 01:03:01.462201 | orchestrator | Saturday 07 March 2026 01:02:58 +0000 (0:00:00.658) 0:00:56.324 ********
2026-03-07 01:03:01.462209 | orchestrator | ok: [testbed-manager] => (item=ceph)
2026-03-07 01:03:01.462217 | orchestrator | ok: [testbed-manager] => (item=rados)
2026-03-07 01:03:01.462225 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin)
2026-03-07 01:03:01.462233 | orchestrator | ok: [testbed-manager] => (item=rbd)
2026-03-07 01:03:01.462241 | orchestrator |
2026-03-07 01:03:01.462249 | orchestrator | PLAY RECAP *********************************************************************
2026-03-07 01:03:01.462257 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-07 01:03:01.462265 | orchestrator |
2026-03-07 01:03:01.462273 | orchestrator |
2026-03-07 01:03:01.462281 | orchestrator | TASKS RECAP ********************************************************************
2026-03-07 01:03:01.462289 | orchestrator | Saturday 07 March 2026 01:03:00 +0000 (0:00:01.688) 0:00:58.012 ********
2026-03-07 01:03:01.462297 | orchestrator | ===============================================================================
2026-03-07 01:03:01.462305 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 42.06s
2026-03-07 01:03:01.462313 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.53s
2026-03-07 01:03:01.462330 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.69s
2026-03-07 01:03:01.462338 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.61s
2026-03-07 01:03:01.462346 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.54s
2026-03-07 01:03:01.462354 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.44s
2026-03-07 01:03:01.462367 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.97s
2026-03-07 01:03:01.462375 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.89s
2026-03-07 01:03:01.462383 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.84s
2026-03-07 01:03:01.462391 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.66s
2026-03-07 01:03:01.462399 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.59s
2026-03-07 01:03:01.462407 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.50s
2026-03-07 01:03:01.462414 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.27s
2026-03-07 01:03:01.462422 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.14s
2026-03-07 01:03:01.462430 | orchestrator | 2026-03-07 01:03:01 | INFO  | Wait 1 second(s) until the next check
2026-03-07 01:03:04.503111 | orchestrator | 2026-03-07 01:03:04 | INFO  | Task d035aea1-5786-4288-9182-85023a27b732 is in state STARTED
2026-03-07 01:03:04.504586 | orchestrator | 2026-03-07 01:03:04 | INFO  | Task caa551a3-2c7b-45b5-8b35-8055f51c9eea is in state SUCCESS
2026-03-07 01:03:04.506907 | orchestrator |
2026-03-07 01:03:04.507072 | orchestrator |
2026-03-07 01:03:04.507095 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-07 01:03:04.507109 | orchestrator |
2026-03-07 01:03:04.507122 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-07 01:03:04.507133 | orchestrator | Saturday 07 March 2026 01:00:07 +0000 (0:00:00.343) 0:00:00.343 ********
2026-03-07 01:03:04.507145 | orchestrator | ok: [testbed-node-0]
2026-03-07 01:03:04.507158 | orchestrator | ok: [testbed-node-1]
2026-03-07 01:03:04.507169 | orchestrator | ok: [testbed-node-2]
2026-03-07 01:03:04.507180 | orchestrator |
2026-03-07 01:03:04.507191 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-07 01:03:04.507203 | orchestrator | Saturday 07 March 2026 01:00:07 +0000 (0:00:00.306)
0:00:00.649 ********
2026-03-07 01:03:04.507214 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2026-03-07 01:03:04.507225 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2026-03-07 01:03:04.507236 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2026-03-07 01:03:04.507247 | orchestrator |
2026-03-07 01:03:04.507258 | orchestrator | PLAY [Apply role keystone] *****************************************************
2026-03-07 01:03:04.507269 | orchestrator |
2026-03-07 01:03:04.507281 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-03-07 01:03:04.507292 | orchestrator | Saturday 07 March 2026 01:00:07 +0000 (0:00:00.448) 0:00:01.098 ********
2026-03-07 01:03:04.507303 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-07 01:03:04.507315 | orchestrator |
2026-03-07 01:03:04.507326 | orchestrator | TASK [keystone : Ensuring config directories exist] ****************************
2026-03-07 01:03:04.507337 | orchestrator | Saturday 07 March 2026 01:00:08 +0000 (0:00:00.621) 0:00:01.719 ********
2026-03-07 01:03:04.507355 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-07 01:03:04.507415 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-07 01:03:04.507506 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-07 01:03:04.507536 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-07 01:03:04.507558 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-07 01:03:04.507577 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-07 01:03:04.507612 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-07 01:03:04.507640 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-07 01:03:04.507661 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-07 01:03:04.507683 | orchestrator |
2026-03-07 01:03:04.507704 | orchestrator | TASK [keystone : Check if policies shall be overwritten] ***********************
2026-03-07 01:03:04.507724 | orchestrator | Saturday 07 March 2026 01:00:10 +0000 (0:00:01.857) 0:00:03.577 ********
2026-03-07 01:03:04.507743 | orchestrator | skipping: [testbed-node-0]
2026-03-07 01:03:04.507763 | orchestrator |
2026-03-07 01:03:04.507795 | orchestrator | TASK [keystone : Set keystone policy file] *************************************
2026-03-07 01:03:04.507815 | orchestrator | Saturday 07 March 2026 01:00:10 +0000 (0:00:00.157) 0:00:03.734 ********
2026-03-07 01:03:04.507834 | orchestrator | skipping: [testbed-node-0]
2026-03-07 01:03:04.507853 | orchestrator | skipping: [testbed-node-1]
2026-03-07 01:03:04.507870 | orchestrator | skipping: [testbed-node-2]
2026-03-07 01:03:04.507889 | orchestrator |
2026-03-07 01:03:04.507908 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] *********
2026-03-07 01:03:04.507927 | orchestrator | Saturday 07 March 2026 01:00:11 +0000 (0:00:00.596) 0:00:04.331 ********
2026-03-07 01:03:04.507947 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-07 01:03:04.507968 | orchestrator |
2026-03-07 01:03:04.508016 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-03-07 01:03:04.508037 | orchestrator | Saturday 07 March 2026 01:00:12 +0000 (0:00:01.016) 0:00:05.348 ********
2026-03-07 01:03:04.508052 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-07 01:03:04.508063 | orchestrator |
2026-03-07 01:03:04.508075 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] *******
2026-03-07 01:03:04.508097 | orchestrator | Saturday 07 March 2026 01:00:12 +0000 (0:00:00.609) 0:00:05.958 ********
2026-03-07 01:03:04.508110 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-07 01:03:04.508124 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-07 01:03:04.508148 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-07 01:03:04.508187 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-07 01:03:04.508208 | orchestrator
| changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-07 01:03:04.508240 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-07 01:03:04.508260 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-07 01:03:04.508281 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': 
{'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-07 01:03:04.508308 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-07 01:03:04.508326 | orchestrator | 2026-03-07 01:03:04.508338 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2026-03-07 01:03:04.508350 | orchestrator | Saturday 07 March 2026 01:00:16 +0000 (0:00:03.646) 0:00:09.604 ******** 2026-03-07 01:03:04.508372 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-07 01:03:04.508392 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-07 01:03:04.508404 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-07 01:03:04.508415 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:03:04.508427 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': 
{'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-07 01:03:04.508444 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-07 01:03:04.508457 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-07 01:03:04.508468 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:03:04.508488 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-07 01:03:04.508507 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 
'timeout': '30'}}})  2026-03-07 01:03:04.508519 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-07 01:03:04.508530 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:03:04.508541 | orchestrator | 2026-03-07 01:03:04.508553 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2026-03-07 01:03:04.508564 | orchestrator | Saturday 07 March 2026 01:00:17 +0000 (0:00:00.680) 0:00:10.284 ******** 2026-03-07 01:03:04.508581 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': 
'5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-07 01:03:04.508594 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-07 01:03:04.508619 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-07 01:03:04.508631 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:03:04.508644 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-07 01:03:04.508656 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-07 01:03:04.508667 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-07 01:03:04.508679 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:03:04.508696 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 
'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-07 01:03:04.508729 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-07 01:03:04.508742 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-07 01:03:04.508753 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:03:04.508765 | orchestrator | 2026-03-07 01:03:04.508776 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2026-03-07 01:03:04.508787 | orchestrator | Saturday 07 March 2026 01:00:17 +0000 (0:00:00.853) 0:00:11.138 ******** 2026-03-07 01:03:04.508799 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-07 01:03:04.508817 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-07 01:03:04.508837 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-07 01:03:04.508856 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-07 01:03:04.508868 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-07 01:03:04.508879 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-07 01:03:04.508890 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-07 01:03:04.508907 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-07 01:03:04.508919 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-07 01:03:04.508936 | orchestrator | 2026-03-07 01:03:04.508948 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2026-03-07 01:03:04.508959 | orchestrator | Saturday 07 March 2026 01:00:21 +0000 (0:00:03.438) 0:00:14.576 ******** 2026-03-07 01:03:04.509045 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 
'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-07 01:03:04.509062 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-07 01:03:04.509075 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-07 01:03:04.509086 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-07 01:03:04.509118 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': 
False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-07 01:03:04.509132 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-07 01:03:04.509143 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-07 01:03:04.509155 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-07 01:03:04.509167 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-07 01:03:04.509178 | orchestrator | 2026-03-07 01:03:04.509189 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2026-03-07 01:03:04.509200 | orchestrator | Saturday 07 March 2026 01:00:27 +0000 (0:00:06.140) 0:00:20.716 ******** 2026-03-07 01:03:04.509212 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:03:04.509223 | orchestrator | changed: [testbed-node-1] 2026-03-07 01:03:04.509234 | orchestrator | changed: [testbed-node-2] 2026-03-07 01:03:04.509245 | orchestrator | 2026-03-07 01:03:04.509263 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2026-03-07 01:03:04.509274 | orchestrator | Saturday 07 March 2026 01:00:29 +0000 (0:00:01.742) 0:00:22.459 ******** 2026-03-07 01:03:04.509284 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:03:04.509295 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:03:04.509306 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:03:04.509317 | orchestrator | 2026-03-07 01:03:04.509328 | 
orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2026-03-07 01:03:04.509343 | orchestrator | Saturday 07 March 2026 01:00:29 +0000 (0:00:00.628) 0:00:23.088 ******** 2026-03-07 01:03:04.509355 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:03:04.509366 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:03:04.509377 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:03:04.509388 | orchestrator | 2026-03-07 01:03:04.509398 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2026-03-07 01:03:04.509409 | orchestrator | Saturday 07 March 2026 01:00:30 +0000 (0:00:00.363) 0:00:23.452 ******** 2026-03-07 01:03:04.509420 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:03:04.509431 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:03:04.509441 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:03:04.509451 | orchestrator | 2026-03-07 01:03:04.509461 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2026-03-07 01:03:04.509470 | orchestrator | Saturday 07 March 2026 01:00:30 +0000 (0:00:00.581) 0:00:24.033 ******** 2026-03-07 01:03:04.509489 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 
'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-07 01:03:04.509500 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-07 01:03:04.509510 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-07 01:03:04.509520 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:03:04.509531 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-07 01:03:04.509552 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-07 01:03:04.509569 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-07 01:03:04.509579 | 
orchestrator | skipping: [testbed-node-1] 2026-03-07 01:03:04.509590 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-07 01:03:04.509601 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-07 01:03:04.509611 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-07 01:03:04.509627 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:03:04.509637 | orchestrator | 2026-03-07 01:03:04.509648 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-07 01:03:04.509658 | orchestrator | Saturday 07 March 2026 01:00:31 +0000 (0:00:00.827) 0:00:24.861 ******** 2026-03-07 01:03:04.509667 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:03:04.509677 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:03:04.509687 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:03:04.509696 | orchestrator | 2026-03-07 01:03:04.509706 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2026-03-07 01:03:04.509715 | orchestrator | Saturday 07 March 2026 01:00:32 +0000 (0:00:00.342) 0:00:25.204 ******** 2026-03-07 01:03:04.509725 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-03-07 01:03:04.509736 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-03-07 01:03:04.509745 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-03-07 01:03:04.509755 | orchestrator | 2026-03-07 01:03:04.509769 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2026-03-07 01:03:04.509779 | orchestrator | Saturday 07 March 2026 01:00:33 +0000 (0:00:01.620) 0:00:26.825 ******** 2026-03-07 01:03:04.509789 | orchestrator | ok: [testbed-node-0 
-> localhost] 2026-03-07 01:03:04.509798 | orchestrator | 2026-03-07 01:03:04.509808 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2026-03-07 01:03:04.509818 | orchestrator | Saturday 07 March 2026 01:00:34 +0000 (0:00:01.097) 0:00:27.923 ******** 2026-03-07 01:03:04.509827 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:03:04.509837 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:03:04.509847 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:03:04.509856 | orchestrator | 2026-03-07 01:03:04.509866 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2026-03-07 01:03:04.509875 | orchestrator | Saturday 07 March 2026 01:00:35 +0000 (0:00:01.010) 0:00:28.933 ******** 2026-03-07 01:03:04.509885 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-07 01:03:04.509894 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-03-07 01:03:04.509904 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-03-07 01:03:04.509914 | orchestrator | 2026-03-07 01:03:04.509924 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2026-03-07 01:03:04.509940 | orchestrator | Saturday 07 March 2026 01:00:37 +0000 (0:00:01.495) 0:00:30.428 ******** 2026-03-07 01:03:04.509950 | orchestrator | ok: [testbed-node-0] 2026-03-07 01:03:04.509959 | orchestrator | ok: [testbed-node-1] 2026-03-07 01:03:04.509969 | orchestrator | ok: [testbed-node-2] 2026-03-07 01:03:04.509998 | orchestrator | 2026-03-07 01:03:04.510008 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2026-03-07 01:03:04.510072 | orchestrator | Saturday 07 March 2026 01:00:37 +0000 (0:00:00.337) 0:00:30.766 ******** 2026-03-07 01:03:04.510086 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-03-07 01:03:04.510096 | orchestrator | changed: 
[testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-03-07 01:03:04.510105 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-03-07 01:03:04.510115 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-03-07 01:03:04.510132 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-03-07 01:03:04.510142 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-03-07 01:03:04.510152 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-03-07 01:03:04.510162 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-03-07 01:03:04.510172 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-03-07 01:03:04.510182 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-03-07 01:03:04.510192 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-03-07 01:03:04.510202 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-03-07 01:03:04.510212 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-03-07 01:03:04.510222 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-03-07 01:03:04.510231 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-03-07 01:03:04.510241 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 
'id_rsa'}) 2026-03-07 01:03:04.510251 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-07 01:03:04.510261 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-07 01:03:04.510271 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-07 01:03:04.510281 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-07 01:03:04.510291 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-07 01:03:04.510300 | orchestrator | 2026-03-07 01:03:04.510310 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2026-03-07 01:03:04.510320 | orchestrator | Saturday 07 March 2026 01:00:46 +0000 (0:00:09.216) 0:00:39.983 ******** 2026-03-07 01:03:04.510330 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-07 01:03:04.510340 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-07 01:03:04.510349 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-07 01:03:04.510359 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-07 01:03:04.510369 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-07 01:03:04.510379 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-07 01:03:04.510389 | orchestrator | 2026-03-07 01:03:04.510399 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2026-03-07 01:03:04.510414 | orchestrator | Saturday 07 March 2026 01:00:49 +0000 (0:00:03.094) 0:00:43.077 ******** 2026-03-07 01:03:04.510433 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-07 01:03:04.510455 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 
'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-07 01:03:04.510466 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-07 01:03:04.510477 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-07 01:03:04.510492 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-07 01:03:04.510503 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-07 01:03:04.510525 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-07 01:03:04.510536 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-07 01:03:04.510546 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-07 01:03:04.510556 | orchestrator | 2026-03-07 01:03:04.510566 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-07 01:03:04.510576 | orchestrator | Saturday 07 March 2026 01:00:52 +0000 (0:00:02.564) 0:00:45.641 ******** 2026-03-07 01:03:04.510586 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:03:04.510596 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:03:04.510607 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:03:04.510616 | orchestrator | 2026-03-07 01:03:04.510626 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2026-03-07 01:03:04.510636 | orchestrator | Saturday 07 March 2026 01:00:52 +0000 (0:00:00.349) 0:00:45.991 ******** 2026-03-07 01:03:04.510646 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:03:04.510655 | orchestrator | 2026-03-07 01:03:04.510665 | orchestrator | TASK [keystone : Creating 
Keystone database user and setting permissions] ****** 2026-03-07 01:03:04.510674 | orchestrator | Saturday 07 March 2026 01:00:55 +0000 (0:00:02.493) 0:00:48.485 ******** 2026-03-07 01:03:04.510684 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:03:04.510694 | orchestrator | 2026-03-07 01:03:04.510704 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2026-03-07 01:03:04.510713 | orchestrator | Saturday 07 March 2026 01:00:57 +0000 (0:00:02.289) 0:00:50.775 ******** 2026-03-07 01:03:04.510723 | orchestrator | ok: [testbed-node-2] 2026-03-07 01:03:04.510733 | orchestrator | ok: [testbed-node-1] 2026-03-07 01:03:04.510742 | orchestrator | ok: [testbed-node-0] 2026-03-07 01:03:04.510752 | orchestrator | 2026-03-07 01:03:04.510762 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2026-03-07 01:03:04.510771 | orchestrator | Saturday 07 March 2026 01:00:58 +0000 (0:00:00.937) 0:00:51.713 ******** 2026-03-07 01:03:04.510782 | orchestrator | ok: [testbed-node-0] 2026-03-07 01:03:04.510792 | orchestrator | ok: [testbed-node-1] 2026-03-07 01:03:04.510808 | orchestrator | ok: [testbed-node-2] 2026-03-07 01:03:04.510817 | orchestrator | 2026-03-07 01:03:04.510828 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2026-03-07 01:03:04.510838 | orchestrator | Saturday 07 March 2026 01:00:58 +0000 (0:00:00.354) 0:00:52.068 ******** 2026-03-07 01:03:04.510847 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:03:04.510861 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:03:04.510872 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:03:04.510882 | orchestrator | 2026-03-07 01:03:04.510892 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2026-03-07 01:03:04.510902 | orchestrator | Saturday 07 March 2026 01:00:59 +0000 (0:00:00.342) 0:00:52.410 ******** 
2026-03-07 01:03:04.510912 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:03:04.510921 | orchestrator | 2026-03-07 01:03:04.510931 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2026-03-07 01:03:04.510941 | orchestrator | Saturday 07 March 2026 01:01:14 +0000 (0:00:15.048) 0:01:07.459 ******** 2026-03-07 01:03:04.510951 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:03:04.510961 | orchestrator | 2026-03-07 01:03:04.510970 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-03-07 01:03:04.511032 | orchestrator | Saturday 07 March 2026 01:01:25 +0000 (0:00:11.108) 0:01:18.567 ******** 2026-03-07 01:03:04.511043 | orchestrator | 2026-03-07 01:03:04.511053 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-03-07 01:03:04.511063 | orchestrator | Saturday 07 March 2026 01:01:25 +0000 (0:00:00.079) 0:01:18.646 ******** 2026-03-07 01:03:04.511073 | orchestrator | 2026-03-07 01:03:04.511083 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-03-07 01:03:04.511099 | orchestrator | Saturday 07 March 2026 01:01:25 +0000 (0:00:00.070) 0:01:18.717 ******** 2026-03-07 01:03:04.511109 | orchestrator | 2026-03-07 01:03:04.511119 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2026-03-07 01:03:04.511129 | orchestrator | Saturday 07 March 2026 01:01:25 +0000 (0:00:00.068) 0:01:18.786 ******** 2026-03-07 01:03:04.511139 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:03:04.511149 | orchestrator | changed: [testbed-node-2] 2026-03-07 01:03:04.511159 | orchestrator | changed: [testbed-node-1] 2026-03-07 01:03:04.511169 | orchestrator | 2026-03-07 01:03:04.511178 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2026-03-07 01:03:04.511186 | orchestrator | 
Saturday 07 March 2026 01:01:51 +0000 (0:00:25.999) 0:01:44.785 ********
2026-03-07 01:03:04.511193 | orchestrator | changed: [testbed-node-2]
2026-03-07 01:03:04.511202 | orchestrator | changed: [testbed-node-1]
2026-03-07 01:03:04.511210 | orchestrator | changed: [testbed-node-0]
2026-03-07 01:03:04.511218 | orchestrator |
2026-03-07 01:03:04.511225 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************
2026-03-07 01:03:04.511234 | orchestrator | Saturday 07 March 2026 01:01:59 +0000 (0:00:07.978) 0:01:52.764 ********
2026-03-07 01:03:04.511242 | orchestrator | changed: [testbed-node-2]
2026-03-07 01:03:04.511250 | orchestrator | changed: [testbed-node-0]
2026-03-07 01:03:04.511258 | orchestrator | changed: [testbed-node-1]
2026-03-07 01:03:04.511265 | orchestrator |
2026-03-07 01:03:04.511273 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-03-07 01:03:04.511287 | orchestrator | Saturday 07 March 2026 01:02:11 +0000 (0:00:11.818) 0:02:04.583 ********
2026-03-07 01:03:04.511300 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-07 01:03:04.511314 | orchestrator |
2026-03-07 01:03:04.511327 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] ***********************
2026-03-07 01:03:04.511341 | orchestrator | Saturday 07 March 2026 01:02:12 +0000 (0:00:00.857) 0:02:05.440 ********
2026-03-07 01:03:04.511353 | orchestrator | ok: [testbed-node-2]
2026-03-07 01:03:04.511367 | orchestrator | ok: [testbed-node-1]
2026-03-07 01:03:04.511382 | orchestrator | ok: [testbed-node-0]
2026-03-07 01:03:04.511406 | orchestrator |
2026-03-07 01:03:04.511421 | orchestrator | TASK [keystone : Run key distribution] *****************************************
2026-03-07 01:03:04.511437 | orchestrator | Saturday 07 March 2026 01:02:13 +0000 (0:00:00.908) 0:02:06.348 ********
2026-03-07 01:03:04.511452 | orchestrator | changed: [testbed-node-0]
2026-03-07 01:03:04.511466 | orchestrator |
2026-03-07 01:03:04.511480 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] ****
2026-03-07 01:03:04.511495 | orchestrator | Saturday 07 March 2026 01:02:14 +0000 (0:00:01.793) 0:02:08.142 ********
2026-03-07 01:03:04.511511 | orchestrator | changed: [testbed-node-0] => (item=RegionOne)
2026-03-07 01:03:04.511525 | orchestrator |
2026-03-07 01:03:04.511541 | orchestrator | TASK [service-ks-register : keystone | Creating services] **********************
2026-03-07 01:03:04.511557 | orchestrator | Saturday 07 March 2026 01:02:26 +0000 (0:00:11.918) 0:02:20.060 ********
2026-03-07 01:03:04.511573 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity))
2026-03-07 01:03:04.511584 | orchestrator |
2026-03-07 01:03:04.511592 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] *********************
2026-03-07 01:03:04.511600 | orchestrator | Saturday 07 March 2026 01:02:51 +0000 (0:00:24.912) 0:02:44.972 ********
2026-03-07 01:03:04.511608 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal)
2026-03-07 01:03:04.511617 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public)
2026-03-07 01:03:04.511624 | orchestrator |
2026-03-07 01:03:04.511632 | orchestrator | TASK [service-ks-register : keystone | Creating projects] **********************
2026-03-07 01:03:04.511640 | orchestrator | Saturday 07 March 2026 01:02:58 +0000 (0:00:06.829) 0:02:51.802 ********
2026-03-07 01:03:04.511648 | orchestrator | skipping: [testbed-node-0]
2026-03-07 01:03:04.511656 | orchestrator |
2026-03-07 01:03:04.511664 | orchestrator | TASK [service-ks-register : keystone | Creating users] *************************
2026-03-07 01:03:04.511672 | orchestrator | Saturday 07 March 2026 01:02:58 +0000 (0:00:00.144) 0:02:51.946 ********
2026-03-07 01:03:04.511680 | orchestrator | skipping: [testbed-node-0]
2026-03-07 01:03:04.511688 | orchestrator |
2026-03-07 01:03:04.511696 | orchestrator | TASK [service-ks-register : keystone | Creating roles] *************************
2026-03-07 01:03:04.511704 | orchestrator | Saturday 07 March 2026 01:02:58 +0000 (0:00:00.127) 0:02:52.073 ********
2026-03-07 01:03:04.511712 | orchestrator | skipping: [testbed-node-0]
2026-03-07 01:03:04.511720 | orchestrator |
2026-03-07 01:03:04.511728 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ********************
2026-03-07 01:03:04.511742 | orchestrator | Saturday 07 March 2026 01:02:59 +0000 (0:00:00.157) 0:02:52.230 ********
2026-03-07 01:03:04.511751 | orchestrator | skipping: [testbed-node-0]
2026-03-07 01:03:04.511759 | orchestrator |
2026-03-07 01:03:04.511767 | orchestrator | TASK [keystone : Creating default user role] ***********************************
2026-03-07 01:03:04.511775 | orchestrator | Saturday 07 March 2026 01:02:59 +0000 (0:00:00.692) 0:02:52.923 ********
2026-03-07 01:03:04.511783 | orchestrator | ok: [testbed-node-0]
2026-03-07 01:03:04.511791 | orchestrator |
2026-03-07 01:03:04.511799 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-03-07 01:03:04.511808 | orchestrator | Saturday 07 March 2026 01:03:02 +0000 (0:00:03.227) 0:02:56.151 ********
2026-03-07 01:03:04.511815 | orchestrator | skipping: [testbed-node-0]
2026-03-07 01:03:04.511824 | orchestrator | skipping: [testbed-node-1]
2026-03-07 01:03:04.511832 | orchestrator | skipping: [testbed-node-2]
2026-03-07 01:03:04.511840 | orchestrator |
2026-03-07 01:03:04.511848 | orchestrator | PLAY RECAP *********************************************************************
2026-03-07 01:03:04.511857 | orchestrator | testbed-node-0 : ok=33  changed=19  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-03-07 01:03:04.511873 | orchestrator | testbed-node-1 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-03-07 01:03:04.511882 | orchestrator | testbed-node-2 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-03-07 01:03:04.511897 | orchestrator |
2026-03-07 01:03:04.511905 | orchestrator |
2026-03-07 01:03:04.511913 | orchestrator | TASKS RECAP ********************************************************************
2026-03-07 01:03:04.511921 | orchestrator | Saturday 07 March 2026 01:03:03 +0000 (0:00:00.480) 0:02:56.631 ********
2026-03-07 01:03:04.511929 | orchestrator | ===============================================================================
2026-03-07 01:03:04.511937 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 26.00s
2026-03-07 01:03:04.511945 | orchestrator | service-ks-register : keystone | Creating services --------------------- 24.91s
2026-03-07 01:03:04.511955 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 15.05s
2026-03-07 01:03:04.511963 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 11.92s
2026-03-07 01:03:04.511971 | orchestrator | keystone : Restart keystone container ---------------------------------- 11.82s
2026-03-07 01:03:04.511995 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 11.11s
2026-03-07 01:03:04.512003 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 9.22s
2026-03-07 01:03:04.512012 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 7.98s
2026-03-07 01:03:04.512020 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 6.83s
2026-03-07 01:03:04.512028 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 6.14s
2026-03-07 01:03:04.512037 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.65s
2026-03-07 01:03:04.512045 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.44s
2026-03-07 01:03:04.512053 | orchestrator | keystone : Creating default user role ----------------------------------- 3.23s
2026-03-07 01:03:04.512061 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 3.09s
2026-03-07 01:03:04.512070 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.56s
2026-03-07 01:03:04.512078 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.49s
2026-03-07 01:03:04.512086 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.29s
2026-03-07 01:03:04.512094 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 1.86s
2026-03-07 01:03:04.512102 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.79s
2026-03-07 01:03:04.512110 | orchestrator | keystone : Copying keystone-startup script for keystone ----------------- 1.74s
2026-03-07 01:03:04.512118 | orchestrator | 2026-03-07 01:03:04 | INFO  | Task 5eeb1760-d11a-40c7-b12d-d67e51fe7efe is in state STARTED
2026-03-07 01:03:04.512126 | orchestrator | 2026-03-07 01:03:04 | INFO  | Task 3aba6176-b7f1-4522-8b49-ff5587839d91 is in state STARTED
2026-03-07 01:03:04.512134 | orchestrator | 2026-03-07 01:03:04 | INFO  | Wait 1 second(s) until the next check
2026-03-07 01:03:07.550221 | orchestrator | 2026-03-07 01:03:07 | INFO  | Task dd436db4-c0e4-4e80-8902-e092306d4084 is in state STARTED
2026-03-07 01:03:07.553644 | orchestrator | 2026-03-07 01:03:07 | INFO  | Task d035aea1-5786-4288-9182-85023a27b732 is in state STARTED
2026-03-07 01:03:07.553699 | orchestrator | 2026-03-07 01:03:07 | INFO  | Task 8ce8127b-7ddb-4641-b548-1945b09efb54 is in state STARTED
2026-03-07 01:03:07.558779 |
orchestrator | 2026-03-07 01:03:07 | INFO  | Task 5eeb1760-d11a-40c7-b12d-d67e51fe7efe is in state STARTED
2026-03-07 01:03:07.559667 | orchestrator | 2026-03-07 01:03:07 | INFO  | Task 3aba6176-b7f1-4522-8b49-ff5587839d91 is in state STARTED
2026-03-07 01:03:07.559751 | orchestrator | 2026-03-07 01:03:07 | INFO  | Wait 1 second(s) until the next check
2026-03-07 01:03:10.601504 | orchestrator | 2026-03-07 01:03:10 | INFO  | Task dd436db4-c0e4-4e80-8902-e092306d4084 is in state STARTED
2026-03-07 01:03:10.602521 | orchestrator | 2026-03-07 01:03:10 | INFO  | Task d035aea1-5786-4288-9182-85023a27b732 is in state SUCCESS
2026-03-07 01:03:10.604302 | orchestrator | 2026-03-07 01:03:10 | INFO  | Task 8ce8127b-7ddb-4641-b548-1945b09efb54 is in state STARTED
2026-03-07 01:03:10.605645 | orchestrator | 2026-03-07 01:03:10 | INFO  | Task 6eb4d691-2def-419f-aeb8-bfec41046f2b is in state STARTED
2026-03-07 01:03:10.607237 | orchestrator | 2026-03-07 01:03:10 | INFO  | Task 5eeb1760-d11a-40c7-b12d-d67e51fe7efe is in state STARTED
2026-03-07 01:03:10.608736 | orchestrator | 2026-03-07 01:03:10 | INFO  | Task 3aba6176-b7f1-4522-8b49-ff5587839d91 is in state STARTED
2026-03-07 01:03:10.608767 | orchestrator | 2026-03-07 01:03:10 | INFO  | Wait 1 second(s) until the next check
2026-03-07 01:04:29.945117 | orchestrator | 2026-03-07 01:04:29 | INFO  | Task dd436db4-c0e4-4e80-8902-e092306d4084 is in state STARTED
2026-03-07 01:04:29.946291 | orchestrator | 2026-03-07 01:04:29 | INFO  | Task 8ce8127b-7ddb-4641-b548-1945b09efb54 is in state STARTED
2026-03-07 01:04:29.947285 | orchestrator | 2026-03-07 01:04:29 | INFO  | Task 6eb4d691-2def-419f-aeb8-bfec41046f2b is in state STARTED
2026-03-07 01:04:29.948615 | orchestrator | 2026-03-07 01:04:29 | INFO  | Task 5eeb1760-d11a-40c7-b12d-d67e51fe7efe is in state SUCCESS
2026-03-07 01:04:29.949634 | orchestrator | 2026-03-07 01:04:29 | INFO  | Task 3aba6176-b7f1-4522-8b49-ff5587839d91 is in state STARTED
2026-03-07 01:04:29.949687 | orchestrator | 2026-03-07 01:04:29 | INFO  | Wait 1 second(s) until the next check
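The console output above comes from a wait loop that polls a set of task UUIDs once per second and drops each task from the pending set when it reaches a terminal state (e.g. SUCCESS). A minimal sketch of that pattern, assuming a hypothetical `get_task_state` callable as a stand-in for the real OSISM/Celery state lookup:

```python
import time


def wait_for_tasks(get_task_state, task_ids, interval=1.0, timeout=3600.0):
    """Poll task states until every task is terminal, logging each check.

    `get_task_state` is a hypothetical stand-in: it takes a task id and
    returns a state string such as "STARTED" or "SUCCESS".
    """
    terminal = {"SUCCESS", "FAILURE"}
    pending = set(task_ids)
    deadline = time.monotonic() + timeout
    while pending:
        if time.monotonic() > deadline:
            raise TimeoutError(f"tasks still pending: {sorted(pending)}")
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in terminal:
                # Terminal tasks are no longer polled on later iterations.
                pending.discard(task_id)
        if pending:
            print(f"Wait {interval:g} second(s) until the next check")
            time.sleep(interval)
```

This is only a sketch of the visible logging pattern; the actual OSISM client wraps Celery result objects rather than a bare callable.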
2026-03-07 01:05:46.177635 | orchestrator | 2026-03-07 01:05:46 | INFO  | Task
dd436db4-c0e4-4e80-8902-e092306d4084 is in state STARTED 2026-03-07 01:05:46.180134 | orchestrator | 2026-03-07 01:05:46 | INFO  | Task 8ce8127b-7ddb-4641-b548-1945b09efb54 is in state SUCCESS 2026-03-07 01:05:46.181149 | orchestrator | 2026-03-07 01:05:46.181200 | orchestrator | 2026-03-07 01:05:46.181209 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-07 01:05:46.181217 | orchestrator | 2026-03-07 01:05:46.181223 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-07 01:05:46.181227 | orchestrator | Saturday 07 March 2026 01:03:06 +0000 (0:00:00.255) 0:00:00.255 ******** 2026-03-07 01:05:46.181231 | orchestrator | ok: [testbed-node-0] 2026-03-07 01:05:46.181237 | orchestrator | ok: [testbed-node-1] 2026-03-07 01:05:46.181241 | orchestrator | ok: [testbed-node-2] 2026-03-07 01:05:46.181245 | orchestrator | 2026-03-07 01:05:46.181249 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-07 01:05:46.181253 | orchestrator | Saturday 07 March 2026 01:03:06 +0000 (0:00:00.635) 0:00:00.891 ******** 2026-03-07 01:05:46.181257 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2026-03-07 01:05:46.181261 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2026-03-07 01:05:46.181265 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2026-03-07 01:05:46.181269 | orchestrator | 2026-03-07 01:05:46.181273 | orchestrator | PLAY [Wait for the Keystone service] ******************************************* 2026-03-07 01:05:46.181276 | orchestrator | 2026-03-07 01:05:46.181280 | orchestrator | TASK [Waiting for Keystone public port to be UP] ******************************* 2026-03-07 01:05:46.181284 | orchestrator | Saturday 07 March 2026 01:03:07 +0000 (0:00:01.191) 0:00:02.083 ******** 2026-03-07 01:05:46.181289 | orchestrator | ok: [testbed-node-2] 2026-03-07 
01:05:46.181295 | orchestrator | ok: [testbed-node-0] 2026-03-07 01:05:46.181300 | orchestrator | ok: [testbed-node-1] 2026-03-07 01:05:46.181306 | orchestrator | 2026-03-07 01:05:46.181311 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-07 01:05:46.181318 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-07 01:05:46.181325 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-07 01:05:46.181331 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-07 01:05:46.181337 | orchestrator | 2026-03-07 01:05:46.181343 | orchestrator | 2026-03-07 01:05:46.181349 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-07 01:05:46.181354 | orchestrator | Saturday 07 March 2026 01:03:08 +0000 (0:00:00.909) 0:00:02.992 ******** 2026-03-07 01:05:46.181360 | orchestrator | =============================================================================== 2026-03-07 01:05:46.181366 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.19s 2026-03-07 01:05:46.181372 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.91s 2026-03-07 01:05:46.181377 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.64s 2026-03-07 01:05:46.181383 | orchestrator | 2026-03-07 01:05:46.181389 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-03-07 01:05:46.181396 | orchestrator | 2.16.14 2026-03-07 01:05:46.181402 | orchestrator | 2026-03-07 01:05:46.181409 | orchestrator | PLAY [Bootstraph ceph dashboard] *********************************************** 2026-03-07 01:05:46.181415 | orchestrator | 2026-03-07 01:05:46.181421 | orchestrator | TASK [Disable the ceph 
dashboard] ********************************************** 2026-03-07 01:05:46.181427 | orchestrator | Saturday 07 March 2026 01:03:05 +0000 (0:00:00.307) 0:00:00.307 ******** 2026-03-07 01:05:46.181433 | orchestrator | changed: [testbed-manager] 2026-03-07 01:05:46.181440 | orchestrator | 2026-03-07 01:05:46.181447 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2026-03-07 01:05:46.181453 | orchestrator | Saturday 07 March 2026 01:03:07 +0000 (0:00:02.102) 0:00:02.410 ******** 2026-03-07 01:05:46.181479 | orchestrator | changed: [testbed-manager] 2026-03-07 01:05:46.181485 | orchestrator | 2026-03-07 01:05:46.181489 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2026-03-07 01:05:46.181493 | orchestrator | Saturday 07 March 2026 01:03:08 +0000 (0:00:01.252) 0:00:03.662 ******** 2026-03-07 01:05:46.181497 | orchestrator | changed: [testbed-manager] 2026-03-07 01:05:46.181500 | orchestrator | 2026-03-07 01:05:46.181504 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2026-03-07 01:05:46.181509 | orchestrator | Saturday 07 March 2026 01:03:10 +0000 (0:00:01.126) 0:00:04.789 ******** 2026-03-07 01:05:46.181514 | orchestrator | changed: [testbed-manager] 2026-03-07 01:05:46.181520 | orchestrator | 2026-03-07 01:05:46.181526 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2026-03-07 01:05:46.181532 | orchestrator | Saturday 07 March 2026 01:03:11 +0000 (0:00:01.534) 0:00:06.323 ******** 2026-03-07 01:05:46.181538 | orchestrator | changed: [testbed-manager] 2026-03-07 01:05:46.181547 | orchestrator | 2026-03-07 01:05:46.181554 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2026-03-07 01:05:46.181562 | orchestrator | Saturday 07 March 2026 01:03:13 +0000 (0:00:01.658) 0:00:07.982 ******** 2026-03-07 01:05:46.181567 | 
orchestrator | changed: [testbed-manager] 2026-03-07 01:05:46.181573 | orchestrator | 2026-03-07 01:05:46.181579 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2026-03-07 01:05:46.181585 | orchestrator | Saturday 07 March 2026 01:03:14 +0000 (0:00:01.319) 0:00:09.302 ******** 2026-03-07 01:05:46.181934 | orchestrator | changed: [testbed-manager] 2026-03-07 01:05:46.182366 | orchestrator | 2026-03-07 01:05:46.182389 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2026-03-07 01:05:46.182395 | orchestrator | Saturday 07 March 2026 01:03:16 +0000 (0:00:02.006) 0:00:11.309 ******** 2026-03-07 01:05:46.182399 | orchestrator | changed: [testbed-manager] 2026-03-07 01:05:46.182403 | orchestrator | 2026-03-07 01:05:46.182408 | orchestrator | TASK [Create admin user] ******************************************************* 2026-03-07 01:05:46.182413 | orchestrator | Saturday 07 March 2026 01:03:18 +0000 (0:00:01.693) 0:00:13.002 ******** 2026-03-07 01:05:46.182416 | orchestrator | changed: [testbed-manager] 2026-03-07 01:05:46.182420 | orchestrator | 2026-03-07 01:05:46.182456 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2026-03-07 01:05:46.182462 | orchestrator | Saturday 07 March 2026 01:04:03 +0000 (0:00:45.683) 0:00:58.685 ******** 2026-03-07 01:05:46.182467 | orchestrator | skipping: [testbed-manager] 2026-03-07 01:05:46.182479 | orchestrator | 2026-03-07 01:05:46.182511 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-03-07 01:05:46.182525 | orchestrator | 2026-03-07 01:05:46.182531 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-03-07 01:05:46.182537 | orchestrator | Saturday 07 March 2026 01:04:04 +0000 (0:00:00.226) 0:00:58.912 ******** 2026-03-07 01:05:46.182543 | orchestrator | changed: [testbed-node-0] 
2026-03-07 01:05:46.182549 | orchestrator | 2026-03-07 01:05:46.182569 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-03-07 01:05:46.182576 | orchestrator | 2026-03-07 01:05:46.182583 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-03-07 01:05:46.182590 | orchestrator | Saturday 07 March 2026 01:04:16 +0000 (0:00:11.781) 0:01:10.694 ******** 2026-03-07 01:05:46.182596 | orchestrator | changed: [testbed-node-1] 2026-03-07 01:05:46.182604 | orchestrator | 2026-03-07 01:05:46.182610 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-03-07 01:05:46.182616 | orchestrator | 2026-03-07 01:05:46.182622 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-03-07 01:05:46.182628 | orchestrator | Saturday 07 March 2026 01:04:17 +0000 (0:00:01.360) 0:01:12.055 ******** 2026-03-07 01:05:46.182635 | orchestrator | changed: [testbed-node-2] 2026-03-07 01:05:46.182641 | orchestrator | 2026-03-07 01:05:46.182648 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-07 01:05:46.182672 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-07 01:05:46.182682 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-07 01:05:46.182689 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-07 01:05:46.182695 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-07 01:05:46.182701 | orchestrator | 2026-03-07 01:05:46.182707 | orchestrator | 2026-03-07 01:05:46.182711 | orchestrator | 2026-03-07 01:05:46.182715 | orchestrator | TASKS RECAP 
******************************************************************** 2026-03-07 01:05:46.182719 | orchestrator | Saturday 07 March 2026 01:04:28 +0000 (0:00:11.283) 0:01:23.338 ******** 2026-03-07 01:05:46.182722 | orchestrator | =============================================================================== 2026-03-07 01:05:46.182727 | orchestrator | Create admin user ------------------------------------------------------ 45.68s 2026-03-07 01:05:46.182731 | orchestrator | Restart ceph manager service ------------------------------------------- 24.43s 2026-03-07 01:05:46.182734 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 2.10s 2026-03-07 01:05:46.182738 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 2.01s 2026-03-07 01:05:46.182742 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.69s 2026-03-07 01:05:46.182746 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.66s 2026-03-07 01:05:46.182750 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.53s 2026-03-07 01:05:46.182753 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.32s 2026-03-07 01:05:46.182757 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 1.25s 2026-03-07 01:05:46.182761 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.13s 2026-03-07 01:05:46.182765 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.23s 2026-03-07 01:05:46.182768 | orchestrator | 2026-03-07 01:05:46.182772 | orchestrator | 2026-03-07 01:05:46.182776 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-07 01:05:46.182780 | orchestrator | 2026-03-07 01:05:46.182784 | orchestrator | TASK [Group hosts based 
on Kolla action] *************************************** 2026-03-07 01:05:46.182787 | orchestrator | Saturday 07 March 2026 01:03:10 +0000 (0:00:00.627) 0:00:00.627 ******** 2026-03-07 01:05:46.182792 | orchestrator | ok: [testbed-node-0] 2026-03-07 01:05:46.182796 | orchestrator | ok: [testbed-node-1] 2026-03-07 01:05:46.182800 | orchestrator | ok: [testbed-node-2] 2026-03-07 01:05:46.182804 | orchestrator | 2026-03-07 01:05:46.182809 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-07 01:05:46.182815 | orchestrator | Saturday 07 March 2026 01:03:11 +0000 (0:00:00.809) 0:00:01.437 ******** 2026-03-07 01:05:46.182821 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2026-03-07 01:05:46.182830 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2026-03-07 01:05:46.182838 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2026-03-07 01:05:46.182845 | orchestrator | 2026-03-07 01:05:46.182851 | orchestrator | PLAY [Apply role barbican] ***************************************************** 2026-03-07 01:05:46.182857 | orchestrator | 2026-03-07 01:05:46.182863 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-03-07 01:05:46.182868 | orchestrator | Saturday 07 March 2026 01:03:13 +0000 (0:00:01.961) 0:00:03.399 ******** 2026-03-07 01:05:46.182875 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 01:05:46.182889 | orchestrator | 2026-03-07 01:05:46.182895 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2026-03-07 01:05:46.182901 | orchestrator | Saturday 07 March 2026 01:03:14 +0000 (0:00:01.205) 0:00:04.604 ******** 2026-03-07 01:05:46.182914 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2026-03-07 01:05:46.182921 | orchestrator | 2026-03-07 
01:05:46.182941 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2026-03-07 01:05:46.182948 | orchestrator | Saturday 07 March 2026 01:03:19 +0000 (0:00:04.419) 0:00:09.024 ******** 2026-03-07 01:05:46.182955 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2026-03-07 01:05:46.182962 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2026-03-07 01:05:46.182968 | orchestrator | 2026-03-07 01:05:46.182975 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2026-03-07 01:05:46.182981 | orchestrator | Saturday 07 March 2026 01:03:26 +0000 (0:00:07.405) 0:00:16.430 ******** 2026-03-07 01:05:46.182989 | orchestrator | changed: [testbed-node-0] => (item=service) 2026-03-07 01:05:46.182993 | orchestrator | 2026-03-07 01:05:46.182998 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2026-03-07 01:05:46.183002 | orchestrator | Saturday 07 March 2026 01:03:30 +0000 (0:00:03.632) 0:00:20.062 ******** 2026-03-07 01:05:46.183007 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service) 2026-03-07 01:05:46.183012 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-07 01:05:46.183016 | orchestrator | 2026-03-07 01:05:46.183021 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2026-03-07 01:05:46.183025 | orchestrator | Saturday 07 March 2026 01:03:33 +0000 (0:00:03.682) 0:00:23.744 ******** 2026-03-07 01:05:46.183030 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-07 01:05:46.183037 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2026-03-07 01:05:46.183043 | orchestrator | changed: [testbed-node-0] => (item=creator) 2026-03-07 01:05:46.183048 | orchestrator | changed: 
[testbed-node-0] => (item=observer) 2026-03-07 01:05:46.183079 | orchestrator | changed: [testbed-node-0] => (item=audit) 2026-03-07 01:05:46.183084 | orchestrator | 2026-03-07 01:05:46.183090 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2026-03-07 01:05:46.183097 | orchestrator | Saturday 07 March 2026 01:03:50 +0000 (0:00:16.286) 0:00:40.031 ******** 2026-03-07 01:05:46.183106 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2026-03-07 01:05:46.183113 | orchestrator | 2026-03-07 01:05:46.183118 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2026-03-07 01:05:46.183124 | orchestrator | Saturday 07 March 2026 01:03:55 +0000 (0:00:05.241) 0:00:45.272 ******** 2026-03-07 01:05:46.183134 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-07 01:05:46.183146 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-07 01:05:46.183171 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-07 01:05:46.183179 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-07 01:05:46.183185 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-07 01:05:46.183192 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-07 01:05:46.183198 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-07 01:05:46.183210 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-07 01:05:46.183226 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-07 01:05:46.183234 | orchestrator | 2026-03-07 01:05:46.183240 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2026-03-07 01:05:46.183247 | orchestrator | Saturday 07 March 2026 01:03:58 +0000 (0:00:02.907) 0:00:48.187 ******** 2026-03-07 01:05:46.183254 | 
orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2026-03-07 01:05:46.183260 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2026-03-07 01:05:46.183267 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2026-03-07 01:05:46.183273 | orchestrator | 2026-03-07 01:05:46.183279 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2026-03-07 01:05:46.183285 | orchestrator | Saturday 07 March 2026 01:04:01 +0000 (0:00:03.094) 0:00:51.282 ******** 2026-03-07 01:05:46.183291 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:05:46.183311 | orchestrator | 2026-03-07 01:05:46.183317 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2026-03-07 01:05:46.183324 | orchestrator | Saturday 07 March 2026 01:04:01 +0000 (0:00:00.149) 0:00:51.432 ******** 2026-03-07 01:05:46.183330 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:05:46.183336 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:05:46.183342 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:05:46.183349 | orchestrator | 2026-03-07 01:05:46.183356 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-03-07 01:05:46.183361 | orchestrator | Saturday 07 March 2026 01:04:02 +0000 (0:00:00.659) 0:00:52.091 ******** 2026-03-07 01:05:46.183365 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 01:05:46.183369 | orchestrator | 2026-03-07 01:05:46.183373 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2026-03-07 01:05:46.183377 | orchestrator | Saturday 07 March 2026 01:04:02 +0000 (0:00:00.720) 0:00:52.811 ******** 2026-03-07 01:05:46.183381 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 
'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-07 01:05:46.183395 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-07 01:05:46.183403 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-07 01:05:46.183418 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-07 01:05:46.183426 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-07 01:05:46.183433 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-07 01:05:46.183446 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-07 01:05:46.183540 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-07 01:05:46.183560 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-07 01:05:46.183565 | orchestrator | 2026-03-07 01:05:46.183570 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2026-03-07 01:05:46.183575 | orchestrator | Saturday 07 March 2026 01:04:07 +0000 (0:00:05.169) 0:00:57.981 ******** 2026-03-07 01:05:46.183592 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-07 01:05:46.183599 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-07 01:05:46.183608 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-07 01:05:46.183624 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:05:46.183630 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 
'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-07 01:05:46.183637 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-07 01:05:46.183652 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-07 01:05:46.183659 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:05:46.183665 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-07 01:05:46.183671 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-07 01:05:46.183683 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-07 01:05:46.183688 | orchestrator | skipping: 
[testbed-node-1] 2026-03-07 01:05:46.183694 | orchestrator | 2026-03-07 01:05:46.183699 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2026-03-07 01:05:46.183706 | orchestrator | Saturday 07 March 2026 01:04:10 +0000 (0:00:02.717) 0:01:00.699 ******** 2026-03-07 01:05:46.183714 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-07 01:05:46.183721 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-07 01:05:46.183729 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-07 01:05:46.183733 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:05:46.183737 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-07 01:05:46.183745 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-07 01:05:46.183749 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-07 01:05:46.183753 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:05:46.183757 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-07 01:05:46.183770 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-07 01:05:46.183774 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-07 01:05:46.183778 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:05:46.183786 | orchestrator | 2026-03-07 01:05:46.183791 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2026-03-07 01:05:46.183798 | orchestrator | Saturday 07 March 2026 01:04:12 +0000 (0:00:02.137) 0:01:02.836 ******** 2026-03-07 01:05:46.183803 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-07 01:05:46.183810 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-07 01:05:46.183816 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-07 01:05:46.183833 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-07 01:05:46.183838 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-07 01:05:46.183845 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-07 01:05:46.183849 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-07 01:05:46.183854 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-07 01:05:46.183857 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 
'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-07 01:05:46.183861 | orchestrator | 2026-03-07 01:05:46.183865 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2026-03-07 01:05:46.183869 | orchestrator | Saturday 07 March 2026 01:04:16 +0000 (0:00:03.736) 0:01:06.572 ******** 2026-03-07 01:05:46.183873 | orchestrator | changed: [testbed-node-1] 2026-03-07 01:05:46.183877 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:05:46.183880 | orchestrator | changed: [testbed-node-2] 2026-03-07 01:05:46.183884 | orchestrator | 2026-03-07 01:05:46.183888 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2026-03-07 01:05:46.183891 | orchestrator | Saturday 07 March 2026 01:04:20 +0000 (0:00:04.366) 0:01:10.939 ******** 2026-03-07 01:05:46.183895 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-07 01:05:46.183899 | orchestrator | 2026-03-07 01:05:46.183903 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2026-03-07 01:05:46.183910 | orchestrator | Saturday 07 March 2026 01:04:22 +0000 (0:00:01.602) 0:01:12.542 ******** 2026-03-07 01:05:46.183916 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:05:46.183920 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:05:46.183929 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:05:46.183932 | orchestrator | 2026-03-07 01:05:46.183937 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2026-03-07 
01:05:46.183940 | orchestrator | Saturday 07 March 2026 01:04:24 +0000 (0:00:02.007) 0:01:14.549 ******** 2026-03-07 01:05:46.183944 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-07 01:05:46.183948 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': 
'9311', 'tls_backend': 'no'}}}}) 2026-03-07 01:05:46.183952 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-07 01:05:46.183956 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-07 01:05:46.183966 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-07 01:05:46.183976 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-07 01:05:46.183980 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-07 01:05:46.183984 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-07 01:05:46.183988 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-07 01:05:46.183992 | orchestrator | 2026-03-07 01:05:46.183996 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2026-03-07 01:05:46.184000 | orchestrator | Saturday 07 March 2026 01:04:41 +0000 (0:00:17.088) 0:01:31.638 ******** 2026-03-07 01:05:46.184004 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 
'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-07 01:05:46.184017 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-07 01:05:46.184022 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-07 01:05:46.184028 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:05:46.184035 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-07 01:05:46.184041 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-07 01:05:46.184049 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-07 01:05:46.184084 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:05:46.184096 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 
'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-07 01:05:46.184114 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-07 01:05:46.184121 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-07 01:05:46.184127 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:05:46.184133 | orchestrator | 2026-03-07 01:05:46.184139 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2026-03-07 01:05:46.184145 | orchestrator | Saturday 07 March 2026 01:04:43 +0000 (0:00:01.487) 0:01:33.125 ******** 2026-03-07 01:05:46.184151 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-07 01:05:46.184158 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-07 01:05:46.184178 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-07 01:05:46.184184 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-07 01:05:46.184188 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-07 01:05:46.184192 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-07 01:05:46.184196 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-07 01:05:46.184200 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-07 01:05:46.184208 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-07 01:05:46.184212 | orchestrator | 2026-03-07 01:05:46.184216 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-03-07 01:05:46.184220 | orchestrator | Saturday 07 March 2026 01:04:48 +0000 (0:00:05.269) 0:01:38.394 ******** 2026-03-07 01:05:46.184224 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:05:46.184228 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:05:46.184236 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:05:46.184243 | orchestrator | 2026-03-07 01:05:46.184252 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2026-03-07 
01:05:46.184258 | orchestrator | Saturday 07 March 2026 01:04:48 +0000 (0:00:00.515) 0:01:38.910 ******** 2026-03-07 01:05:46.184264 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:05:46.184316 | orchestrator | 2026-03-07 01:05:46.184325 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2026-03-07 01:05:46.184331 | orchestrator | Saturday 07 March 2026 01:04:51 +0000 (0:00:02.375) 0:01:41.285 ******** 2026-03-07 01:05:46.184337 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:05:46.184344 | orchestrator | 2026-03-07 01:05:46.184350 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2026-03-07 01:05:46.184356 | orchestrator | Saturday 07 March 2026 01:04:53 +0000 (0:00:02.221) 0:01:43.506 ******** 2026-03-07 01:05:46.184363 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:05:46.184369 | orchestrator | 2026-03-07 01:05:46.184376 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-03-07 01:05:46.184382 | orchestrator | Saturday 07 March 2026 01:05:04 +0000 (0:00:10.677) 0:01:54.184 ******** 2026-03-07 01:05:46.184388 | orchestrator | 2026-03-07 01:05:46.184394 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-03-07 01:05:46.184401 | orchestrator | Saturday 07 March 2026 01:05:04 +0000 (0:00:00.145) 0:01:54.330 ******** 2026-03-07 01:05:46.184408 | orchestrator | 2026-03-07 01:05:46.184415 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-03-07 01:05:46.184420 | orchestrator | Saturday 07 March 2026 01:05:04 +0000 (0:00:00.093) 0:01:54.423 ******** 2026-03-07 01:05:46.184424 | orchestrator | 2026-03-07 01:05:46.184428 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2026-03-07 01:05:46.184432 | orchestrator | Saturday 07 March 2026 01:05:04 +0000 
(0:00:00.093) 0:01:54.516 ******** 2026-03-07 01:05:46.184436 | orchestrator | changed: [testbed-node-1] 2026-03-07 01:05:46.184440 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:05:46.184443 | orchestrator | changed: [testbed-node-2] 2026-03-07 01:05:46.184447 | orchestrator | 2026-03-07 01:05:46.184451 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2026-03-07 01:05:46.184455 | orchestrator | Saturday 07 March 2026 01:05:20 +0000 (0:00:16.067) 0:02:10.584 ******** 2026-03-07 01:05:46.184459 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:05:46.184463 | orchestrator | changed: [testbed-node-1] 2026-03-07 01:05:46.184466 | orchestrator | changed: [testbed-node-2] 2026-03-07 01:05:46.184470 | orchestrator | 2026-03-07 01:05:46.184474 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2026-03-07 01:05:46.184478 | orchestrator | Saturday 07 March 2026 01:05:33 +0000 (0:00:12.573) 0:02:23.158 ******** 2026-03-07 01:05:46.184482 | orchestrator | changed: [testbed-node-1] 2026-03-07 01:05:46.184491 | orchestrator | changed: [testbed-node-2] 2026-03-07 01:05:46.184495 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:05:46.184498 | orchestrator | 2026-03-07 01:05:46.184502 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-07 01:05:46.184507 | orchestrator | testbed-node-0 : ok=24  changed=19  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-07 01:05:46.184512 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-07 01:05:46.184516 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-07 01:05:46.184520 | orchestrator | 2026-03-07 01:05:46.184524 | orchestrator | 2026-03-07 01:05:46.184527 | orchestrator | TASKS RECAP 
******************************************************************** 2026-03-07 01:05:46.184531 | orchestrator | Saturday 07 March 2026 01:05:45 +0000 (0:00:11.929) 0:02:35.087 ******** 2026-03-07 01:05:46.184535 | orchestrator | =============================================================================== 2026-03-07 01:05:46.184539 | orchestrator | barbican : Copying over barbican.conf ---------------------------------- 17.09s 2026-03-07 01:05:46.184543 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 16.29s 2026-03-07 01:05:46.184547 | orchestrator | barbican : Restart barbican-api container ------------------------------ 16.07s 2026-03-07 01:05:46.184550 | orchestrator | barbican : Restart barbican-keystone-listener container ---------------- 12.57s 2026-03-07 01:05:46.184554 | orchestrator | barbican : Restart barbican-worker container --------------------------- 11.93s 2026-03-07 01:05:46.184558 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 10.68s 2026-03-07 01:05:46.184562 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 7.40s 2026-03-07 01:05:46.184566 | orchestrator | barbican : Check barbican containers ------------------------------------ 5.27s 2026-03-07 01:05:46.184570 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 5.24s 2026-03-07 01:05:46.184574 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 5.17s 2026-03-07 01:05:46.184578 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 4.43s 2026-03-07 01:05:46.184584 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 4.37s 2026-03-07 01:05:46.184590 | orchestrator | barbican : Copying over config.json files for services ------------------ 3.74s 2026-03-07 01:05:46.184600 | orchestrator | service-ks-register : barbican | 
Creating users ------------------------- 3.68s 2026-03-07 01:05:46.184607 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.63s 2026-03-07 01:05:46.184613 | orchestrator | barbican : Ensuring vassals config directories exist -------------------- 3.09s 2026-03-07 01:05:46.184623 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 2.92s 2026-03-07 01:05:46.184730 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS certificate --- 2.72s 2026-03-07 01:05:46.184737 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.38s 2026-03-07 01:05:46.184741 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.22s 2026-03-07 01:05:46.184745 | orchestrator | 2026-03-07 01:05:46 | INFO  | Task 6eb4d691-2def-419f-aeb8-bfec41046f2b is in state STARTED 2026-03-07 01:05:46.186333 | orchestrator | 2026-03-07 01:05:46 | INFO  | Task 3aba6176-b7f1-4522-8b49-ff5587839d91 is in state STARTED 2026-03-07 01:05:46.186389 | orchestrator | 2026-03-07 01:05:46 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:05:49.236148 | orchestrator | 2026-03-07 01:05:49 | INFO  | Task dd436db4-c0e4-4e80-8902-e092306d4084 is in state STARTED 2026-03-07 01:05:49.238963 | orchestrator | 2026-03-07 01:05:49 | INFO  | Task aef7bb9f-a2c6-494a-9844-2edc32fb92ae is in state STARTED 2026-03-07 01:05:49.240249 | orchestrator | 2026-03-07 01:05:49 | INFO  | Task 6eb4d691-2def-419f-aeb8-bfec41046f2b is in state STARTED 2026-03-07 01:05:49.242403 | orchestrator | 2026-03-07 01:05:49 | INFO  | Task 3aba6176-b7f1-4522-8b49-ff5587839d91 is in state STARTED 2026-03-07 01:05:49.242446 | orchestrator | 2026-03-07 01:05:49 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:05:52.285158 | orchestrator | 2026-03-07 01:05:52 | INFO  | Task dd436db4-c0e4-4e80-8902-e092306d4084 is in state STARTED 2026-03-07 
01:05:52.287837 | orchestrator | 2026-03-07 01:05:52 | INFO  | Task aef7bb9f-a2c6-494a-9844-2edc32fb92ae is in state STARTED 2026-03-07 01:05:52.289041 | orchestrator | 2026-03-07 01:05:52 | INFO  | Task 6eb4d691-2def-419f-aeb8-bfec41046f2b is in state STARTED 2026-03-07 01:05:52.289748 | orchestrator | 2026-03-07 01:05:52 | INFO  | Task 3aba6176-b7f1-4522-8b49-ff5587839d91 is in state STARTED 2026-03-07 01:05:52.289773 | orchestrator | 2026-03-07 01:05:52 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:06:28.941227 | orchestrator | 2026-03-07 01:06:28 | INFO  | Task dd436db4-c0e4-4e80-8902-e092306d4084 is in state STARTED 2026-03-07 01:06:28.947119 | orchestrator | 2026-03-07 01:06:28 | INFO  | Task
aef7bb9f-a2c6-494a-9844-2edc32fb92ae is in state STARTED 2026-03-07 01:06:28.947224 | orchestrator | 2026-03-07 01:06:28 | INFO  | Task 6eb4d691-2def-419f-aeb8-bfec41046f2b is in state STARTED 2026-03-07 01:06:28.949178 | orchestrator | 2026-03-07 01:06:28 | INFO  | Task 3aba6176-b7f1-4522-8b49-ff5587839d91 is in state STARTED 2026-03-07 01:06:28.949234 | orchestrator | 2026-03-07 01:06:28 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:06:32.061909 | orchestrator | 2026-03-07 01:06:32 | INFO  | Task dd436db4-c0e4-4e80-8902-e092306d4084 is in state STARTED 2026-03-07 01:06:32.065838 | orchestrator | 2026-03-07 01:06:32 | INFO  | Task aef7bb9f-a2c6-494a-9844-2edc32fb92ae is in state STARTED 2026-03-07 01:06:32.067848 | orchestrator | 2026-03-07 01:06:32 | INFO  | Task 6eb4d691-2def-419f-aeb8-bfec41046f2b is in state STARTED 2026-03-07 01:06:32.069154 | orchestrator | 2026-03-07 01:06:32 | INFO  | Task 3aba6176-b7f1-4522-8b49-ff5587839d91 is in state STARTED 2026-03-07 01:06:32.069220 | orchestrator | 2026-03-07 01:06:32 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:06:35.132991 | orchestrator | 2026-03-07 01:06:35 | INFO  | Task dd436db4-c0e4-4e80-8902-e092306d4084 is in state STARTED 2026-03-07 01:06:35.133447 | orchestrator | 2026-03-07 01:06:35 | INFO  | Task aef7bb9f-a2c6-494a-9844-2edc32fb92ae is in state STARTED 2026-03-07 01:06:35.134444 | orchestrator | 2026-03-07 01:06:35 | INFO  | Task 6eb4d691-2def-419f-aeb8-bfec41046f2b is in state STARTED 2026-03-07 01:06:35.136161 | orchestrator | 2026-03-07 01:06:35 | INFO  | Task 3aba6176-b7f1-4522-8b49-ff5587839d91 is in state STARTED 2026-03-07 01:06:35.136214 | orchestrator | 2026-03-07 01:06:35 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:06:38.177528 | orchestrator | 2026-03-07 01:06:38 | INFO  | Task dd436db4-c0e4-4e80-8902-e092306d4084 is in state STARTED 2026-03-07 01:06:38.178434 | orchestrator | 2026-03-07 01:06:38 | INFO  | Task 
aef7bb9f-a2c6-494a-9844-2edc32fb92ae is in state STARTED 2026-03-07 01:06:38.180928 | orchestrator | 2026-03-07 01:06:38 | INFO  | Task 6eb4d691-2def-419f-aeb8-bfec41046f2b is in state STARTED 2026-03-07 01:06:38.180995 | orchestrator | 2026-03-07 01:06:38 | INFO  | Task 3aba6176-b7f1-4522-8b49-ff5587839d91 is in state STARTED 2026-03-07 01:06:38.181047 | orchestrator | 2026-03-07 01:06:38 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:06:41.229234 | orchestrator | 2026-03-07 01:06:41 | INFO  | Task dd436db4-c0e4-4e80-8902-e092306d4084 is in state STARTED 2026-03-07 01:06:41.229796 | orchestrator | 2026-03-07 01:06:41 | INFO  | Task aef7bb9f-a2c6-494a-9844-2edc32fb92ae is in state STARTED 2026-03-07 01:06:41.231321 | orchestrator | 2026-03-07 01:06:41 | INFO  | Task 6eb4d691-2def-419f-aeb8-bfec41046f2b is in state STARTED 2026-03-07 01:06:41.233010 | orchestrator | 2026-03-07 01:06:41 | INFO  | Task 3aba6176-b7f1-4522-8b49-ff5587839d91 is in state STARTED 2026-03-07 01:06:41.233497 | orchestrator | 2026-03-07 01:06:41 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:06:44.261190 | orchestrator | 2026-03-07 01:06:44 | INFO  | Task dd436db4-c0e4-4e80-8902-e092306d4084 is in state STARTED 2026-03-07 01:06:44.261581 | orchestrator | 2026-03-07 01:06:44 | INFO  | Task aef7bb9f-a2c6-494a-9844-2edc32fb92ae is in state STARTED 2026-03-07 01:06:44.262298 | orchestrator | 2026-03-07 01:06:44 | INFO  | Task 6eb4d691-2def-419f-aeb8-bfec41046f2b is in state STARTED 2026-03-07 01:06:44.263008 | orchestrator | 2026-03-07 01:06:44 | INFO  | Task 3aba6176-b7f1-4522-8b49-ff5587839d91 is in state STARTED 2026-03-07 01:06:44.264759 | orchestrator | 2026-03-07 01:06:44 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:06:47.299560 | orchestrator | 2026-03-07 01:06:47 | INFO  | Task dd436db4-c0e4-4e80-8902-e092306d4084 is in state STARTED 2026-03-07 01:06:47.300668 | orchestrator | 2026-03-07 01:06:47 | INFO  | Task 
aef7bb9f-a2c6-494a-9844-2edc32fb92ae is in state STARTED 2026-03-07 01:06:47.302357 | orchestrator | 2026-03-07 01:06:47 | INFO  | Task 6eb4d691-2def-419f-aeb8-bfec41046f2b is in state STARTED 2026-03-07 01:06:47.303473 | orchestrator | 2026-03-07 01:06:47 | INFO  | Task 3aba6176-b7f1-4522-8b49-ff5587839d91 is in state STARTED 2026-03-07 01:06:47.303521 | orchestrator | 2026-03-07 01:06:47 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:06:50.355538 | orchestrator | 2026-03-07 01:06:50 | INFO  | Task dd436db4-c0e4-4e80-8902-e092306d4084 is in state STARTED 2026-03-07 01:06:50.357522 | orchestrator | 2026-03-07 01:06:50 | INFO  | Task aef7bb9f-a2c6-494a-9844-2edc32fb92ae is in state STARTED 2026-03-07 01:06:50.361037 | orchestrator | 2026-03-07 01:06:50 | INFO  | Task 6eb4d691-2def-419f-aeb8-bfec41046f2b is in state STARTED 2026-03-07 01:06:50.362519 | orchestrator | 2026-03-07 01:06:50 | INFO  | Task 3aba6176-b7f1-4522-8b49-ff5587839d91 is in state STARTED 2026-03-07 01:06:50.362583 | orchestrator | 2026-03-07 01:06:50 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:06:53.407062 | orchestrator | 2026-03-07 01:06:53 | INFO  | Task dd436db4-c0e4-4e80-8902-e092306d4084 is in state STARTED 2026-03-07 01:06:53.407816 | orchestrator | 2026-03-07 01:06:53 | INFO  | Task aef7bb9f-a2c6-494a-9844-2edc32fb92ae is in state STARTED 2026-03-07 01:06:53.408712 | orchestrator | 2026-03-07 01:06:53 | INFO  | Task 6eb4d691-2def-419f-aeb8-bfec41046f2b is in state STARTED 2026-03-07 01:06:53.410898 | orchestrator | 2026-03-07 01:06:53 | INFO  | Task 3aba6176-b7f1-4522-8b49-ff5587839d91 is in state STARTED 2026-03-07 01:06:53.410924 | orchestrator | 2026-03-07 01:06:53 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:06:56.446238 | orchestrator | 2026-03-07 01:06:56 | INFO  | Task dd436db4-c0e4-4e80-8902-e092306d4084 is in state STARTED 2026-03-07 01:06:56.446421 | orchestrator | 2026-03-07 01:06:56 | INFO  | Task 
aef7bb9f-a2c6-494a-9844-2edc32fb92ae is in state STARTED 2026-03-07 01:06:56.448601 | orchestrator | 2026-03-07 01:06:56 | INFO  | Task 6eb4d691-2def-419f-aeb8-bfec41046f2b is in state STARTED 2026-03-07 01:06:56.448643 | orchestrator | 2026-03-07 01:06:56 | INFO  | Task 3aba6176-b7f1-4522-8b49-ff5587839d91 is in state STARTED 2026-03-07 01:06:56.448651 | orchestrator | 2026-03-07 01:06:56 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:06:59.485805 | orchestrator | 2026-03-07 01:06:59 | INFO  | Task dd436db4-c0e4-4e80-8902-e092306d4084 is in state STARTED 2026-03-07 01:06:59.486569 | orchestrator | 2026-03-07 01:06:59 | INFO  | Task aef7bb9f-a2c6-494a-9844-2edc32fb92ae is in state STARTED 2026-03-07 01:06:59.488475 | orchestrator | 2026-03-07 01:06:59 | INFO  | Task 6eb4d691-2def-419f-aeb8-bfec41046f2b is in state STARTED 2026-03-07 01:06:59.489445 | orchestrator | 2026-03-07 01:06:59 | INFO  | Task 3aba6176-b7f1-4522-8b49-ff5587839d91 is in state STARTED 2026-03-07 01:06:59.489499 | orchestrator | 2026-03-07 01:06:59 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:07:02.529078 | orchestrator | 2026-03-07 01:07:02 | INFO  | Task dd436db4-c0e4-4e80-8902-e092306d4084 is in state STARTED 2026-03-07 01:07:02.532384 | orchestrator | 2026-03-07 01:07:02 | INFO  | Task aef7bb9f-a2c6-494a-9844-2edc32fb92ae is in state STARTED 2026-03-07 01:07:02.534253 | orchestrator | 2026-03-07 01:07:02 | INFO  | Task 6eb4d691-2def-419f-aeb8-bfec41046f2b is in state STARTED 2026-03-07 01:07:02.535700 | orchestrator | 2026-03-07 01:07:02 | INFO  | Task 3aba6176-b7f1-4522-8b49-ff5587839d91 is in state STARTED 2026-03-07 01:07:02.535724 | orchestrator | 2026-03-07 01:07:02 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:07:05.585918 | orchestrator | 2026-03-07 01:07:05 | INFO  | Task dd436db4-c0e4-4e80-8902-e092306d4084 is in state STARTED 2026-03-07 01:07:05.586932 | orchestrator | 2026-03-07 01:07:05 | INFO  | Task 
aef7bb9f-a2c6-494a-9844-2edc32fb92ae is in state STARTED 2026-03-07 01:07:05.588681 | orchestrator | 2026-03-07 01:07:05 | INFO  | Task 6eb4d691-2def-419f-aeb8-bfec41046f2b is in state STARTED 2026-03-07 01:07:05.589764 | orchestrator | 2026-03-07 01:07:05 | INFO  | Task 3aba6176-b7f1-4522-8b49-ff5587839d91 is in state STARTED 2026-03-07 01:07:05.589791 | orchestrator | 2026-03-07 01:07:05 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:07:08.669544 | orchestrator | 2026-03-07 01:07:08 | INFO  | Task dd436db4-c0e4-4e80-8902-e092306d4084 is in state STARTED 2026-03-07 01:07:08.670833 | orchestrator | 2026-03-07 01:07:08 | INFO  | Task aef7bb9f-a2c6-494a-9844-2edc32fb92ae is in state STARTED 2026-03-07 01:07:08.671466 | orchestrator | 2026-03-07 01:07:08 | INFO  | Task 6eb4d691-2def-419f-aeb8-bfec41046f2b is in state STARTED 2026-03-07 01:07:08.672541 | orchestrator | 2026-03-07 01:07:08 | INFO  | Task 3aba6176-b7f1-4522-8b49-ff5587839d91 is in state STARTED 2026-03-07 01:07:08.673912 | orchestrator | 2026-03-07 01:07:08 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:07:11.721452 | orchestrator | 2026-03-07 01:07:11 | INFO  | Task dd436db4-c0e4-4e80-8902-e092306d4084 is in state STARTED 2026-03-07 01:07:11.726566 | orchestrator | 2026-03-07 01:07:11 | INFO  | Task aef7bb9f-a2c6-494a-9844-2edc32fb92ae is in state STARTED 2026-03-07 01:07:11.726699 | orchestrator | 2026-03-07 01:07:11 | INFO  | Task 6eb4d691-2def-419f-aeb8-bfec41046f2b is in state STARTED 2026-03-07 01:07:11.730703 | orchestrator | 2026-03-07 01:07:11 | INFO  | Task 3aba6176-b7f1-4522-8b49-ff5587839d91 is in state STARTED 2026-03-07 01:07:11.730761 | orchestrator | 2026-03-07 01:07:11 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:07:14.769543 | orchestrator | 2026-03-07 01:07:14 | INFO  | Task dd436db4-c0e4-4e80-8902-e092306d4084 is in state STARTED 2026-03-07 01:07:14.771829 | orchestrator | 2026-03-07 01:07:14 | INFO  | Task 
aef7bb9f-a2c6-494a-9844-2edc32fb92ae is in state STARTED 2026-03-07 01:07:14.773152 | orchestrator | 2026-03-07 01:07:14 | INFO  | Task 6eb4d691-2def-419f-aeb8-bfec41046f2b is in state STARTED 2026-03-07 01:07:14.774739 | orchestrator | 2026-03-07 01:07:14 | INFO  | Task 3aba6176-b7f1-4522-8b49-ff5587839d91 is in state STARTED 2026-03-07 01:07:14.775009 | orchestrator | 2026-03-07 01:07:14 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:07:17.818172 | orchestrator | 2026-03-07 01:07:17 | INFO  | Task dd436db4-c0e4-4e80-8902-e092306d4084 is in state STARTED 2026-03-07 01:07:17.821212 | orchestrator | 2026-03-07 01:07:17 | INFO  | Task aef7bb9f-a2c6-494a-9844-2edc32fb92ae is in state STARTED 2026-03-07 01:07:17.823391 | orchestrator | 2026-03-07 01:07:17 | INFO  | Task 6eb4d691-2def-419f-aeb8-bfec41046f2b is in state STARTED 2026-03-07 01:07:17.826450 | orchestrator | 2026-03-07 01:07:17 | INFO  | Task 3aba6176-b7f1-4522-8b49-ff5587839d91 is in state STARTED 2026-03-07 01:07:17.826510 | orchestrator | 2026-03-07 01:07:17 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:07:20.887295 | orchestrator | 2026-03-07 01:07:20 | INFO  | Task dd436db4-c0e4-4e80-8902-e092306d4084 is in state STARTED 2026-03-07 01:07:20.891427 | orchestrator | 2026-03-07 01:07:20 | INFO  | Task aef7bb9f-a2c6-494a-9844-2edc32fb92ae is in state STARTED 2026-03-07 01:07:20.896044 | orchestrator | 2026-03-07 01:07:20 | INFO  | Task 6eb4d691-2def-419f-aeb8-bfec41046f2b is in state STARTED 2026-03-07 01:07:20.898155 | orchestrator | 2026-03-07 01:07:20 | INFO  | Task 3aba6176-b7f1-4522-8b49-ff5587839d91 is in state STARTED 2026-03-07 01:07:20.898190 | orchestrator | 2026-03-07 01:07:20 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:07:23.938449 | orchestrator | 2026-03-07 01:07:23 | INFO  | Task dd436db4-c0e4-4e80-8902-e092306d4084 is in state STARTED 2026-03-07 01:07:23.942743 | orchestrator | 2026-03-07 01:07:23 | INFO  | Task 
aef7bb9f-a2c6-494a-9844-2edc32fb92ae is in state STARTED 2026-03-07 01:07:23.946497 | orchestrator | 2026-03-07 01:07:23 | INFO  | Task 6eb4d691-2def-419f-aeb8-bfec41046f2b is in state STARTED 2026-03-07 01:07:23.948046 | orchestrator | 2026-03-07 01:07:23 | INFO  | Task 3aba6176-b7f1-4522-8b49-ff5587839d91 is in state STARTED 2026-03-07 01:07:23.948124 | orchestrator | 2026-03-07 01:07:23 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:07:26.990587 | orchestrator | 2026-03-07 01:07:26 | INFO  | Task dd436db4-c0e4-4e80-8902-e092306d4084 is in state STARTED 2026-03-07 01:07:26.991126 | orchestrator | 2026-03-07 01:07:26 | INFO  | Task aef7bb9f-a2c6-494a-9844-2edc32fb92ae is in state STARTED 2026-03-07 01:07:26.994329 | orchestrator | 2026-03-07 01:07:26 | INFO  | Task 6eb4d691-2def-419f-aeb8-bfec41046f2b is in state SUCCESS 2026-03-07 01:07:26.996531 | orchestrator | 2026-03-07 01:07:26.996607 | orchestrator | 2026-03-07 01:07:26.996682 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-07 01:07:26.996826 | orchestrator | 2026-03-07 01:07:26.996836 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-07 01:07:26.996870 | orchestrator | Saturday 07 March 2026 01:03:18 +0000 (0:00:00.805) 0:00:00.805 ******** 2026-03-07 01:07:26.996883 | orchestrator | ok: [testbed-node-0] 2026-03-07 01:07:26.996898 | orchestrator | ok: [testbed-node-1] 2026-03-07 01:07:26.996907 | orchestrator | ok: [testbed-node-2] 2026-03-07 01:07:26.996915 | orchestrator | 2026-03-07 01:07:26.996924 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-07 01:07:26.996933 | orchestrator | Saturday 07 March 2026 01:03:19 +0000 (0:00:01.354) 0:00:02.159 ******** 2026-03-07 01:07:26.996942 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2026-03-07 01:07:26.996951 | orchestrator | ok: [testbed-node-1] => 
(item=enable_designate_True) 2026-03-07 01:07:26.996961 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2026-03-07 01:07:26.996970 | orchestrator | 2026-03-07 01:07:26.996979 | orchestrator | PLAY [Apply role designate] **************************************************** 2026-03-07 01:07:26.997057 | orchestrator | 2026-03-07 01:07:26.997073 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-03-07 01:07:26.997087 | orchestrator | Saturday 07 March 2026 01:03:20 +0000 (0:00:00.664) 0:00:02.824 ******** 2026-03-07 01:07:26.997120 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 01:07:26.997133 | orchestrator | 2026-03-07 01:07:26.997205 | orchestrator | TASK [service-ks-register : designate | Creating services] ********************* 2026-03-07 01:07:26.997222 | orchestrator | Saturday 07 March 2026 01:03:21 +0000 (0:00:00.971) 0:00:03.795 ******** 2026-03-07 01:07:26.997233 | orchestrator | changed: [testbed-node-0] => (item=designate (dns)) 2026-03-07 01:07:26.997243 | orchestrator | 2026-03-07 01:07:26.997253 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ******************** 2026-03-07 01:07:26.997264 | orchestrator | Saturday 07 March 2026 01:03:25 +0000 (0:00:04.382) 0:00:08.178 ******** 2026-03-07 01:07:26.997333 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal) 2026-03-07 01:07:26.997347 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public) 2026-03-07 01:07:26.997361 | orchestrator | 2026-03-07 01:07:26.997372 | orchestrator | TASK [service-ks-register : designate | Creating projects] ********************* 2026-03-07 01:07:26.997383 | orchestrator | Saturday 07 March 2026 01:03:32 +0000 (0:00:06.498) 0:00:14.677 ******** 2026-03-07 01:07:26.997393 | 
orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-07 01:07:26.997404 | orchestrator | 2026-03-07 01:07:26.997414 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************ 2026-03-07 01:07:26.997424 | orchestrator | Saturday 07 March 2026 01:03:35 +0000 (0:00:03.034) 0:00:17.711 ******** 2026-03-07 01:07:26.997434 | orchestrator | changed: [testbed-node-0] => (item=designate -> service) 2026-03-07 01:07:26.997445 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-07 01:07:26.997454 | orchestrator | 2026-03-07 01:07:26.997462 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************ 2026-03-07 01:07:26.997471 | orchestrator | Saturday 07 March 2026 01:03:39 +0000 (0:00:03.791) 0:00:21.503 ******** 2026-03-07 01:07:26.997480 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-07 01:07:26.997488 | orchestrator | 2026-03-07 01:07:26.997497 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2026-03-07 01:07:26.997506 | orchestrator | Saturday 07 March 2026 01:03:43 +0000 (0:00:03.997) 0:00:25.501 ******** 2026-03-07 01:07:26.997514 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2026-03-07 01:07:26.997523 | orchestrator | 2026-03-07 01:07:26.997532 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2026-03-07 01:07:26.997540 | orchestrator | Saturday 07 March 2026 01:03:47 +0000 (0:00:04.323) 0:00:29.824 ******** 2026-03-07 01:07:26.997554 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-07 01:07:26.997679 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-07 01:07:26.997822 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': 
'30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-07 01:07:26.997838 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-07 01:07:26.997848 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-07 01:07:26.997857 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-07 01:07:26.997867 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-07 01:07:26.997894 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-07 01:07:26.997904 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-07 01:07:26.997918 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-07 01:07:26.997928 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-07 01:07:26.997937 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-07 01:07:26.997945 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-07 01:07:26.997954 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-07 01:07:26.997976 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 
5672'], 'timeout': '30'}}}) 2026-03-07 01:07:26.997986 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-07 01:07:26.997999 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-07 01:07:26.998008 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-07 01:07:26.998065 | orchestrator | 2026-03-07 01:07:26.998093 | orchestrator | TASK [designate : 
Check if policies shall be overwritten] ********************** 2026-03-07 01:07:26.998122 | orchestrator | Saturday 07 March 2026 01:03:52 +0000 (0:00:05.048) 0:00:34.873 ******** 2026-03-07 01:07:26.998132 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:07:26.998140 | orchestrator | 2026-03-07 01:07:26.998162 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2026-03-07 01:07:26.998171 | orchestrator | Saturday 07 March 2026 01:03:52 +0000 (0:00:00.314) 0:00:35.187 ******** 2026-03-07 01:07:26.998179 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:07:26.998188 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:07:26.998197 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:07:26.998205 | orchestrator | 2026-03-07 01:07:26.998214 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-03-07 01:07:26.998223 | orchestrator | Saturday 07 March 2026 01:03:54 +0000 (0:00:02.088) 0:00:37.276 ******** 2026-03-07 01:07:26.998231 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 01:07:26.998240 | orchestrator | 2026-03-07 01:07:26.998249 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2026-03-07 01:07:26.998265 | orchestrator | Saturday 07 March 2026 01:03:57 +0000 (0:00:02.953) 0:00:40.230 ******** 2026-03-07 01:07:26.998274 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-07 01:07:26.998292 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-07 01:07:26.998306 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': 
'9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-07 01:07:26.998315 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-07 01:07:26.998325 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-07 01:07:26.998340 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-07 01:07:26.998349 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-07 01:07:26.998364 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-07 01:07:26.998378 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-07 01:07:26.998387 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-07 01:07:26.998396 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-07 01:07:26.998405 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-07 01:07:26.998419 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-07 01:07:26.998433 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-07 01:07:26.998443 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-07 01:07:26.998456 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': 
{'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-07 01:07:26.998465 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-07 01:07:26.998474 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-07 01:07:26.998493 | orchestrator | 2026-03-07 01:07:26.998502 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2026-03-07 01:07:26.998511 | orchestrator | Saturday 07 March 2026 
01:04:06 +0000 (0:00:08.987) 0:00:49.218 ******** 2026-03-07 01:07:26.998520 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-07 01:07:26.998529 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-07 01:07:26.998545 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-07 01:07:26.998559 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-07 01:07:26.998568 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-07 01:07:26.998577 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 
'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-07 01:07:26.998591 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-07 01:07:26.998600 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-07 01:07:26.999019 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 
'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-07 01:07:26.999169 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-07 01:07:26.999208 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-07 01:07:26.999223 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-07 01:07:26.999259 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:07:26.999273 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:07:26.999285 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-07 01:07:26.999298 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-07 
01:07:26.999329 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-07 01:07:26.999342 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-07 01:07:26.999359 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-07 01:07:26.999370 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': 
{'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-07 01:07:26.999389 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:07:26.999401 | orchestrator | 2026-03-07 01:07:26.999413 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2026-03-07 01:07:26.999425 | orchestrator | Saturday 07 March 2026 01:04:09 +0000 (0:00:02.974) 0:00:52.192 ******** 2026-03-07 01:07:26.999437 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-07 01:07:26.999449 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-07 01:07:26.999467 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-07 01:07:26.999479 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-07 01:07:26.999496 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-07 01:07:26.999508 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-07 01:07:26.999527 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:07:26.999539 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-07 01:07:26.999550 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-07 01:07:26.999567 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-07 01:07:26.999579 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-07 01:07:26.999597 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': 
{'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-07 01:07:26.999614 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-07 01:07:26.999626 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:07:26.999643 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-07 01:07:26.999663 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-07 01:07:26.999683 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-07 01:07:26.999717 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-mdns 5672'], 'timeout': '30'}}})  2026-03-07 01:07:26.999738 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-07 01:07:26.999782 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-07 01:07:26.999797 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:07:26.999808 | orchestrator | 2026-03-07 01:07:26.999820 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2026-03-07 01:07:26.999832 | orchestrator | Saturday 07 March 2026 01:04:13 +0000 (0:00:03.651) 0:00:55.844 ******** 2026-03-07 01:07:26.999844 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-07 01:07:26.999856 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-07 01:07:26.999876 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-07 01:07:26.999888 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-07 01:07:26.999915 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-07 01:07:26.999954 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-07 01:07:26.999966 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-07 01:07:26.999978 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-07 01:07:26.999996 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-07 01:07:27.000008 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-07 01:07:27.000045 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-07 01:07:27.000059 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-07 01:07:27.000071 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-07 01:07:27.000083 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-07 01:07:27.000094 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-07 01:07:27.000134 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-07 01:07:27.000147 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-07 01:07:27.000177 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-07 01:07:27.000189 | orchestrator | 
2026-03-07 01:07:27.000201 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2026-03-07 01:07:27.000213 | orchestrator | Saturday 07 March 2026 01:04:21 +0000 (0:00:08.044) 0:01:03.888 ******** 2026-03-07 01:07:27.000225 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-07 01:07:27.000237 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9001', 'listen_port': '9001'}}}}) 2026-03-07 01:07:27.000249 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-07 01:07:27.000267 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-07 01:07:27.000293 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-07 01:07:27.000305 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-07 01:07:27.000317 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-07 01:07:27.000329 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-07 01:07:27.000340 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-07 01:07:27.000359 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-07 01:07:27.000378 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-07 01:07:27.000394 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-07 01:07:27.000406 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-07 01:07:27.000417 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 
'timeout': '30'}}}) 2026-03-07 01:07:27.000429 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-07 01:07:27.000441 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-07 01:07:27.000460 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-07 01:07:27.000480 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-07 01:07:27.000491 | orchestrator | 2026-03-07 01:07:27.000503 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2026-03-07 01:07:27.000515 | orchestrator | Saturday 07 March 2026 01:04:54 +0000 (0:00:32.650) 0:01:36.538 ******** 2026-03-07 01:07:27.000526 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-03-07 01:07:27.000538 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-03-07 01:07:27.000554 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-03-07 01:07:27.000566 | orchestrator | 2026-03-07 01:07:27.000577 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2026-03-07 01:07:27.000588 | orchestrator | Saturday 07 March 2026 01:05:03 +0000 (0:00:09.514) 0:01:46.052 ******** 2026-03-07 01:07:27.000600 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-03-07 01:07:27.000611 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-03-07 01:07:27.000622 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-03-07 01:07:27.000633 | orchestrator | 2026-03-07 01:07:27.000645 | orchestrator | TASK [designate : Copying over 
rndc.conf] ************************************** 2026-03-07 01:07:27.000663 | orchestrator | Saturday 07 March 2026 01:05:10 +0000 (0:00:06.764) 0:01:52.817 ******** 2026-03-07 01:07:27.000684 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-07 01:07:27.000707 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-07 01:07:27.000750 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-07 01:07:27.000769 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-07 01:07:27.000790 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-07 01:07:27.000802 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-07 01:07:27.000815 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-07 01:07:27.000827 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-07 01:07:27.000847 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-07 01:07:27.000867 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-07 01:07:27.000885 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-07 01:07:27.000897 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-07 01:07:27.000915 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-07 01:07:27.000935 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-07 01:07:27.000965 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-07 01:07:27.001003 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-07 01:07:27.001038 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-07 01:07:27.001068 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 
'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-07 01:07:27.001089 | orchestrator | 2026-03-07 01:07:27.001152 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2026-03-07 01:07:27.001179 | orchestrator | Saturday 07 March 2026 01:05:13 +0000 (0:00:03.383) 0:01:56.201 ******** 2026-03-07 01:07:27.001204 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-07 01:07:27.001224 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-07 01:07:27.001259 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-07 01:07:27.001291 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-07 01:07:27.001323 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-07 01:07:27.001342 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-07 01:07:27.001362 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-07 01:07:27.001378 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-07 01:07:27.001410 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-07 01:07:27.001430 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-07 01:07:27.001461 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-07 01:07:27.001490 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-07 01:07:27.001509 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-07 01:07:27.001520 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-07 01:07:27.001541 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-07 01:07:27.001554 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-07 01:07:27.001685 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 
'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-07 01:07:27.001712 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-07 01:07:27.001731 | orchestrator | 2026-03-07 01:07:27.001750 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-03-07 01:07:27.001769 | orchestrator | Saturday 07 March 2026 01:05:18 +0000 (0:00:04.290) 0:02:00.491 ******** 2026-03-07 01:07:27.001789 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:07:27.001809 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:07:27.001828 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:07:27.001844 | orchestrator | 2026-03-07 01:07:27.001855 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2026-03-07 01:07:27.001874 | orchestrator | Saturday 07 March 2026 01:05:19 +0000 (0:00:01.060) 0:02:01.552 ******** 2026-03-07 01:07:27.001887 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 
'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-07 01:07:27.001914 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-07 01:07:27.001933 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-07 01:07:27.001951 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-07 01:07:27.001986 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-07 01:07:27.002081 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-07 01:07:27.002196 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-07 01:07:27.002225 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:07:27.002262 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-07 01:07:27.002281 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-07 01:07:27.002300 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-07 01:07:27.002327 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-07 01:07:27.002338 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-07 01:07:27.002348 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:07:27.002365 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-07 01:07:27.002387 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-07 01:07:27.002398 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-07 01:07:27.002408 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-07 01:07:27.002418 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-07 01:07:27.002435 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-07 01:07:27.002445 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:07:27.002455 | orchestrator | 2026-03-07 01:07:27.002465 | orchestrator | TASK [designate : Check designate containers] ********************************** 2026-03-07 01:07:27.002476 | orchestrator | Saturday 07 March 2026 01:05:21 +0000 (0:00:02.007) 0:02:03.560 ******** 2026-03-07 01:07:27.002492 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-07 01:07:27.002511 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-07 01:07:27.002521 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-07 01:07:27.002532 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-07 01:07:27.002548 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-07 01:07:27.002564 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-07 01:07:27.002581 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-07 01:07:27.002591 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-07 01:07:27.002601 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-07 01:07:27.002611 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-07 01:07:27.002626 
| orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-07 01:07:27.002637 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-07 01:07:27.002653 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-07 01:07:27.002674 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 
'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-07 01:07:27.002685 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-07 01:07:27.002695 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-07 01:07:27.002705 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-07 01:07:27.002721 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-07 01:07:27.002732 | orchestrator | 2026-03-07 01:07:27.002742 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-03-07 01:07:27.002752 | orchestrator | Saturday 07 March 2026 01:05:26 +0000 (0:00:05.335) 0:02:08.895 ******** 2026-03-07 01:07:27.002762 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:07:27.002772 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:07:27.002782 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:07:27.002793 | orchestrator | 2026-03-07 01:07:27.002802 | orchestrator | TASK [designate : Creating Designate databases] ******************************** 2026-03-07 01:07:27.002812 | orchestrator | Saturday 07 March 2026 01:05:26 +0000 (0:00:00.451) 0:02:09.347 ******** 2026-03-07 01:07:27.002830 | orchestrator | changed: [testbed-node-0] => (item=designate) 2026-03-07 01:07:27.002841 | orchestrator | 2026-03-07 01:07:27.002852 | orchestrator | TASK [designate : Creating Designate databases user and setting 
permissions] *** 2026-03-07 01:07:27.002862 | orchestrator | Saturday 07 March 2026 01:05:29 +0000 (0:00:02.780) 0:02:12.127 ******** 2026-03-07 01:07:27.002871 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-07 01:07:27.002881 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}] 2026-03-07 01:07:27.002891 | orchestrator | 2026-03-07 01:07:27.002901 | orchestrator | TASK [designate : Running Designate bootstrap container] *********************** 2026-03-07 01:07:27.002911 | orchestrator | Saturday 07 March 2026 01:05:32 +0000 (0:00:02.814) 0:02:14.942 ******** 2026-03-07 01:07:27.002926 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:07:27.002936 | orchestrator | 2026-03-07 01:07:27.002946 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-03-07 01:07:27.002956 | orchestrator | Saturday 07 March 2026 01:05:52 +0000 (0:00:20.442) 0:02:35.385 ******** 2026-03-07 01:07:27.002966 | orchestrator | 2026-03-07 01:07:27.002976 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-03-07 01:07:27.002986 | orchestrator | Saturday 07 March 2026 01:05:53 +0000 (0:00:00.175) 0:02:35.560 ******** 2026-03-07 01:07:27.002996 | orchestrator | 2026-03-07 01:07:27.003006 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-03-07 01:07:27.003016 | orchestrator | Saturday 07 March 2026 01:05:53 +0000 (0:00:00.177) 0:02:35.738 ******** 2026-03-07 01:07:27.003026 | orchestrator | 2026-03-07 01:07:27.003037 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ******** 2026-03-07 01:07:27.003046 | orchestrator | Saturday 07 March 2026 01:05:53 +0000 (0:00:00.234) 0:02:35.973 ******** 2026-03-07 01:07:27.003056 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:07:27.003066 | orchestrator | changed: [testbed-node-1] 2026-03-07 01:07:27.003076 | 
orchestrator | changed: [testbed-node-2] 2026-03-07 01:07:27.003086 | orchestrator | 2026-03-07 01:07:27.003096 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2026-03-07 01:07:27.003144 | orchestrator | Saturday 07 March 2026 01:06:10 +0000 (0:00:16.842) 0:02:52.815 ******** 2026-03-07 01:07:27.003158 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:07:27.003169 | orchestrator | changed: [testbed-node-1] 2026-03-07 01:07:27.003178 | orchestrator | changed: [testbed-node-2] 2026-03-07 01:07:27.003188 | orchestrator | 2026-03-07 01:07:27.003198 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] ************** 2026-03-07 01:07:27.003208 | orchestrator | Saturday 07 March 2026 01:06:27 +0000 (0:00:16.905) 0:03:09.721 ******** 2026-03-07 01:07:27.003218 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:07:27.003229 | orchestrator | changed: [testbed-node-1] 2026-03-07 01:07:27.003238 | orchestrator | changed: [testbed-node-2] 2026-03-07 01:07:27.003251 | orchestrator | 2026-03-07 01:07:27.003267 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] ************* 2026-03-07 01:07:27.003282 | orchestrator | Saturday 07 March 2026 01:06:42 +0000 (0:00:14.962) 0:03:24.684 ******** 2026-03-07 01:07:27.003299 | orchestrator | changed: [testbed-node-1] 2026-03-07 01:07:27.003314 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:07:27.003331 | orchestrator | changed: [testbed-node-2] 2026-03-07 01:07:27.003349 | orchestrator | 2026-03-07 01:07:27.003367 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2026-03-07 01:07:27.003383 | orchestrator | Saturday 07 March 2026 01:06:55 +0000 (0:00:13.501) 0:03:38.185 ******** 2026-03-07 01:07:27.003400 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:07:27.003411 | orchestrator | changed: [testbed-node-1] 2026-03-07 01:07:27.003421 | orchestrator | 
changed: [testbed-node-2] 2026-03-07 01:07:27.003431 | orchestrator | 2026-03-07 01:07:27.003441 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2026-03-07 01:07:27.003451 | orchestrator | Saturday 07 March 2026 01:07:03 +0000 (0:00:07.633) 0:03:45.818 ******** 2026-03-07 01:07:27.003471 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:07:27.003481 | orchestrator | changed: [testbed-node-2] 2026-03-07 01:07:27.003491 | orchestrator | changed: [testbed-node-1] 2026-03-07 01:07:27.003501 | orchestrator | 2026-03-07 01:07:27.003517 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2026-03-07 01:07:27.003541 | orchestrator | Saturday 07 March 2026 01:07:18 +0000 (0:00:15.084) 0:04:00.903 ******** 2026-03-07 01:07:27.003563 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:07:27.003579 | orchestrator | 2026-03-07 01:07:27.003595 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-07 01:07:27.003612 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-07 01:07:27.003627 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-07 01:07:27.003642 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-07 01:07:27.003660 | orchestrator | 2026-03-07 01:07:27.003676 | orchestrator | 2026-03-07 01:07:27.003705 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-07 01:07:27.003721 | orchestrator | Saturday 07 March 2026 01:07:25 +0000 (0:00:07.374) 0:04:08.277 ******** 2026-03-07 01:07:27.003737 | orchestrator | =============================================================================== 2026-03-07 01:07:27.003754 | orchestrator | designate : Copying over designate.conf 
-------------------------------- 32.65s 2026-03-07 01:07:27.003770 | orchestrator | designate : Running Designate bootstrap container ---------------------- 20.44s 2026-03-07 01:07:27.003788 | orchestrator | designate : Restart designate-api container ---------------------------- 16.91s 2026-03-07 01:07:27.003805 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 16.84s 2026-03-07 01:07:27.003822 | orchestrator | designate : Restart designate-worker container ------------------------- 15.08s 2026-03-07 01:07:27.003838 | orchestrator | designate : Restart designate-central container ------------------------ 14.96s 2026-03-07 01:07:27.003849 | orchestrator | designate : Restart designate-producer container ----------------------- 13.50s 2026-03-07 01:07:27.003858 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 9.51s 2026-03-07 01:07:27.003868 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 8.99s 2026-03-07 01:07:27.003878 | orchestrator | designate : Copying over config.json files for services ----------------- 8.04s 2026-03-07 01:07:27.003888 | orchestrator | designate : Restart designate-mdns container ---------------------------- 7.63s 2026-03-07 01:07:27.003907 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 7.37s 2026-03-07 01:07:27.003919 | orchestrator | designate : Copying over named.conf ------------------------------------- 6.77s 2026-03-07 01:07:27.003929 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 6.50s 2026-03-07 01:07:27.003939 | orchestrator | designate : Check designate containers ---------------------------------- 5.34s 2026-03-07 01:07:27.003949 | orchestrator | designate : Ensuring config directories exist --------------------------- 5.05s 2026-03-07 01:07:27.003959 | orchestrator | service-ks-register : designate | Creating services 
--------------------- 4.38s 2026-03-07 01:07:27.003969 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 4.32s 2026-03-07 01:07:27.003979 | orchestrator | designate : Copying over rndc.key --------------------------------------- 4.29s 2026-03-07 01:07:27.003988 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 4.00s 2026-03-07 01:07:27.003999 | orchestrator | 2026-03-07 01:07:26 | INFO  | Task 3aba6176-b7f1-4522-8b49-ff5587839d91 is in state STARTED 2026-03-07 01:07:27.004009 | orchestrator | 2026-03-07 01:07:26 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:07:30.057164 | orchestrator | 2026-03-07 01:07:30 | INFO  | Task f8f426f5-2e46-4aea-be60-c236b1a43b38 is in state STARTED 2026-03-07 01:07:30.058655 | orchestrator | 2026-03-07 01:07:30 | INFO  | Task dd436db4-c0e4-4e80-8902-e092306d4084 is in state STARTED 2026-03-07 01:07:30.061089 | orchestrator | 2026-03-07 01:07:30 | INFO  | Task aef7bb9f-a2c6-494a-9844-2edc32fb92ae is in state STARTED 2026-03-07 01:07:30.063140 | orchestrator | 2026-03-07 01:07:30 | INFO  | Task 3aba6176-b7f1-4522-8b49-ff5587839d91 is in state STARTED 2026-03-07 01:07:30.063240 | orchestrator | 2026-03-07 01:07:30 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:07:33.109890 | orchestrator | 2026-03-07 01:07:33 | INFO  | Task f8f426f5-2e46-4aea-be60-c236b1a43b38 is in state STARTED 2026-03-07 01:07:33.110426 | orchestrator | 2026-03-07 01:07:33 | INFO  | Task dd436db4-c0e4-4e80-8902-e092306d4084 is in state STARTED 2026-03-07 01:07:33.111936 | orchestrator | 2026-03-07 01:07:33 | INFO  | Task aef7bb9f-a2c6-494a-9844-2edc32fb92ae is in state STARTED 2026-03-07 01:07:33.113097 | orchestrator | 2026-03-07 01:07:33 | INFO  | Task 3aba6176-b7f1-4522-8b49-ff5587839d91 is in state STARTED 2026-03-07 01:07:33.113493 | orchestrator | 2026-03-07 01:07:33 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:07:36.173556 | 
orchestrator | 2026-03-07 01:07:36 | INFO  | Task f8f426f5-2e46-4aea-be60-c236b1a43b38 is in state STARTED 2026-03-07 01:07:36.176477 | orchestrator | 2026-03-07 01:07:36 | INFO  | Task dd436db4-c0e4-4e80-8902-e092306d4084 is in state STARTED 2026-03-07 01:07:36.179295 | orchestrator | 2026-03-07 01:07:36 | INFO  | Task aef7bb9f-a2c6-494a-9844-2edc32fb92ae is in state STARTED 2026-03-07 01:07:36.181075 | orchestrator | 2026-03-07 01:07:36 | INFO  | Task 3aba6176-b7f1-4522-8b49-ff5587839d91 is in state STARTED 2026-03-07 01:07:36.181130 | orchestrator | 2026-03-07 01:07:36 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:07:39.230541 | orchestrator | 2026-03-07 01:07:39 | INFO  | Task f8f426f5-2e46-4aea-be60-c236b1a43b38 is in state STARTED 2026-03-07 01:07:39.234208 | orchestrator | 2026-03-07 01:07:39 | INFO  | Task dd436db4-c0e4-4e80-8902-e092306d4084 is in state STARTED 2026-03-07 01:07:39.235041 | orchestrator | 2026-03-07 01:07:39 | INFO  | Task aef7bb9f-a2c6-494a-9844-2edc32fb92ae is in state STARTED 2026-03-07 01:07:39.237047 | orchestrator | 2026-03-07 01:07:39 | INFO  | Task 3aba6176-b7f1-4522-8b49-ff5587839d91 is in state STARTED 2026-03-07 01:07:39.237090 | orchestrator | 2026-03-07 01:07:39 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:07:42.288473 | orchestrator | 2026-03-07 01:07:42 | INFO  | Task f8f426f5-2e46-4aea-be60-c236b1a43b38 is in state STARTED 2026-03-07 01:07:42.291463 | orchestrator | 2026-03-07 01:07:42 | INFO  | Task dd436db4-c0e4-4e80-8902-e092306d4084 is in state STARTED 2026-03-07 01:07:42.292388 | orchestrator | 2026-03-07 01:07:42 | INFO  | Task aef7bb9f-a2c6-494a-9844-2edc32fb92ae is in state STARTED 2026-03-07 01:07:42.293734 | orchestrator | 2026-03-07 01:07:42 | INFO  | Task 3aba6176-b7f1-4522-8b49-ff5587839d91 is in state STARTED 2026-03-07 01:07:42.293786 | orchestrator | 2026-03-07 01:07:42 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:07:45.336977 | orchestrator | 2026-03-07 
01:07:45 | INFO  | Task f8f426f5-2e46-4aea-be60-c236b1a43b38 is in state STARTED 2026-03-07 01:07:45.338713 | orchestrator | 2026-03-07 01:07:45 | INFO  | Task dd436db4-c0e4-4e80-8902-e092306d4084 is in state STARTED 2026-03-07 01:07:45.340015 | orchestrator | 2026-03-07 01:07:45 | INFO  | Task aef7bb9f-a2c6-494a-9844-2edc32fb92ae is in state STARTED 2026-03-07 01:07:45.341286 | orchestrator | 2026-03-07 01:07:45 | INFO  | Task 3aba6176-b7f1-4522-8b49-ff5587839d91 is in state STARTED 2026-03-07 01:07:45.341346 | orchestrator | 2026-03-07 01:07:45 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:07:48.373106 | orchestrator | 2026-03-07 01:07:48 | INFO  | Task f8f426f5-2e46-4aea-be60-c236b1a43b38 is in state STARTED 2026-03-07 01:07:48.376063 | orchestrator | 2026-03-07 01:07:48 | INFO  | Task dd436db4-c0e4-4e80-8902-e092306d4084 is in state STARTED 2026-03-07 01:07:48.377836 | orchestrator | 2026-03-07 01:07:48 | INFO  | Task aef7bb9f-a2c6-494a-9844-2edc32fb92ae is in state STARTED 2026-03-07 01:07:48.380833 | orchestrator | 2026-03-07 01:07:48 | INFO  | Task 3aba6176-b7f1-4522-8b49-ff5587839d91 is in state STARTED 2026-03-07 01:07:48.380917 | orchestrator | 2026-03-07 01:07:48 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:07:51.428590 | orchestrator | 2026-03-07 01:07:51 | INFO  | Task f8f426f5-2e46-4aea-be60-c236b1a43b38 is in state STARTED 2026-03-07 01:07:51.429215 | orchestrator | 2026-03-07 01:07:51 | INFO  | Task dd436db4-c0e4-4e80-8902-e092306d4084 is in state STARTED 2026-03-07 01:07:51.431235 | orchestrator | 2026-03-07 01:07:51 | INFO  | Task aef7bb9f-a2c6-494a-9844-2edc32fb92ae is in state STARTED 2026-03-07 01:07:51.432587 | orchestrator | 2026-03-07 01:07:51 | INFO  | Task 3aba6176-b7f1-4522-8b49-ff5587839d91 is in state STARTED 2026-03-07 01:07:51.432647 | orchestrator | 2026-03-07 01:07:51 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:07:54.479667 | orchestrator | 2026-03-07 01:07:54 | INFO  | Task 
f8f426f5-2e46-4aea-be60-c236b1a43b38 is in state STARTED 2026-03-07 01:07:54.480619 | orchestrator | 2026-03-07 01:07:54 | INFO  | Task dd436db4-c0e4-4e80-8902-e092306d4084 is in state STARTED 2026-03-07 01:07:54.482211 | orchestrator | 2026-03-07 01:07:54 | INFO  | Task aef7bb9f-a2c6-494a-9844-2edc32fb92ae is in state STARTED 2026-03-07 01:07:54.487868 | orchestrator | 2026-03-07 01:07:54 | INFO  | Task 3aba6176-b7f1-4522-8b49-ff5587839d91 is in state SUCCESS 2026-03-07 01:07:54.489061 | orchestrator | 2026-03-07 01:07:54.489098 | orchestrator | 2026-03-07 01:07:54.489105 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-07 01:07:54.489129 | orchestrator | 2026-03-07 01:07:54.489135 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-07 01:07:54.489141 | orchestrator | Saturday 07 March 2026 01:03:05 +0000 (0:00:00.364) 0:00:00.364 ******** 2026-03-07 01:07:54.489147 | orchestrator | ok: [testbed-manager] 2026-03-07 01:07:54.489154 | orchestrator | ok: [testbed-node-0] 2026-03-07 01:07:54.489159 | orchestrator | ok: [testbed-node-1] 2026-03-07 01:07:54.489165 | orchestrator | ok: [testbed-node-2] 2026-03-07 01:07:54.489170 | orchestrator | ok: [testbed-node-3] 2026-03-07 01:07:54.489175 | orchestrator | ok: [testbed-node-4] 2026-03-07 01:07:54.489180 | orchestrator | ok: [testbed-node-5] 2026-03-07 01:07:54.489185 | orchestrator | 2026-03-07 01:07:54.489191 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-07 01:07:54.489197 | orchestrator | Saturday 07 March 2026 01:03:07 +0000 (0:00:01.260) 0:00:01.625 ******** 2026-03-07 01:07:54.489202 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2026-03-07 01:07:54.489208 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2026-03-07 01:07:54.489214 | orchestrator | ok: [testbed-node-1] => 
(item=enable_prometheus_True) 2026-03-07 01:07:54.489218 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2026-03-07 01:07:54.489223 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2026-03-07 01:07:54.489229 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2026-03-07 01:07:54.489234 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2026-03-07 01:07:54.489240 | orchestrator | 2026-03-07 01:07:54.489244 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2026-03-07 01:07:54.489270 | orchestrator | 2026-03-07 01:07:54.489275 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-03-07 01:07:54.489279 | orchestrator | Saturday 07 March 2026 01:03:08 +0000 (0:00:01.017) 0:00:02.642 ******** 2026-03-07 01:07:54.489285 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-07 01:07:54.489291 | orchestrator | 2026-03-07 01:07:54.489296 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2026-03-07 01:07:54.489300 | orchestrator | Saturday 07 March 2026 01:03:10 +0000 (0:00:02.193) 0:00:04.835 ******** 2026-03-07 01:07:54.489307 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-07 01:07:54.489325 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-07 01:07:54.489330 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-07 01:07:54.489336 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-07 01:07:54.489353 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 
'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 01:07:54.489359 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-07 01:07:54.489369 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-07 01:07:54.489373 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 
01:07:54.489381 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-07 01:07:54.489386 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 01:07:54.489391 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 01:07:54.489395 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-07 01:07:54.489404 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 01:07:54.489409 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-07 01:07:54.489418 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-07 01:07:54.489424 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-07 01:07:54.489431 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-07 01:07:54.489436 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 01:07:54.489440 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', 
'/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-07 01:07:54.489445 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-07 01:07:54.489454 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 01:07:54.489461 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-07 01:07:54.489467 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 
'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-07 01:07:54.489476 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-07 01:07:54.489480 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-07 01:07:54.489485 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 01:07:54.489489 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 01:07:54.489497 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-07 01:07:54.489505 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 01:07:54.489509 | orchestrator | 2026-03-07 01:07:54.489514 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-03-07 01:07:54.489518 | orchestrator | Saturday 07 March 2026 01:03:14 +0000 (0:00:04.594) 0:00:09.430 ******** 2026-03-07 01:07:54.489523 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-07 01:07:54.489527 | orchestrator | 2026-03-07 01:07:54.489532 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2026-03-07 01:07:54.489536 | orchestrator | Saturday 07 March 2026 01:03:16 +0000 (0:00:01.769) 0:00:11.200 ******** 2026-03-07 01:07:54.489541 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-07 01:07:54.489548 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-07 01:07:54.489553 | orchestrator | 
changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-07 01:07:54.489557 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-07 01:07:54.489565 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-07 01:07:54.489573 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-07 01:07:54.489578 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-07 01:07:54.489582 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-07 01:07:54.489590 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 01:07:54.489594 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 01:07:54.489599 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-07 01:07:54.489604 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 01:07:54.489616 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-07 01:07:54.489621 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-07 01:07:54.489625 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-07 01:07:54.489630 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 01:07:54.489637 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 
'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-07 01:07:54.489642 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 01:07:54.489647 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 01:07:54.489652 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-07 01:07:54.489674 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-07 01:07:54.489679 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-07 01:07:54.489684 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-07 01:07:54.489689 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-07 01:07:54.489694 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-07 01:07:54.489698 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 01:07:54.489727 | orchestrator 
| changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 01:07:54.489736 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 01:07:54.489741 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 01:07:54.489746 | orchestrator | 2026-03-07 01:07:54.489750 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2026-03-07 01:07:54.489755 | orchestrator | Saturday 07 March 2026 01:03:24 +0000 (0:00:07.830) 0:00:19.031 ******** 2026-03-07 01:07:54.489760 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 
'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-07 01:07:54.489764 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-07 01:07:54.489772 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-07 01:07:54.489777 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 
'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-07 01:07:54.489789 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-07 01:07:54.489794 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-07 01:07:54.489799 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-07 01:07:54.489803 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-07 01:07:54.489808 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-07 01:07:54.489815 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-07 01:07:54.489820 | orchestrator | skipping: [testbed-manager] 2026-03-07 01:07:54.489828 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-07 01:07:54.489832 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-07 01:07:54.489840 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-07 01:07:54.489844 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 
'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-07 01:07:54.489849 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-07 01:07:54.489853 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-07 01:07:54.489858 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-07 01:07:54.489865 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-07 01:07:54.489873 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-07 01:07:54.489877 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-07 01:07:54.489882 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:07:54.489886 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:07:54.489896 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-07 01:07:54.489901 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:07:54.489906 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-07 01:07:54.489910 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-07 01:07:54.489915 | orchestrator | skipping: [testbed-node-3] 2026-03-07 01:07:54.489919 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-07 01:07:54.489927 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-07 01:07:54.489934 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-07 01:07:54.489939 | orchestrator | skipping: [testbed-node-4] 2026-03-07 01:07:54.489943 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-07 01:07:54.489948 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-07 01:07:54.489955 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-07 01:07:54.489959 | orchestrator | skipping: [testbed-node-5] 2026-03-07 01:07:54.489964 | orchestrator | 2026-03-07 01:07:54.489968 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2026-03-07 01:07:54.489973 | orchestrator | Saturday 07 March 2026 01:03:26 +0000 (0:00:02.060) 0:00:21.092 ******** 2026-03-07 01:07:54.489977 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-07 01:07:54.489982 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-07 01:07:54.489986 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-07 01:07:54.489996 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-07 01:07:54.490004 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-07 01:07:54.490011 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-07 01:07:54.490073 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-07 01:07:54.490082 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-07 01:07:54.490089 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-07 01:07:54.490096 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-07 01:07:54.490131 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-07 01:07:54.490139 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-07 01:07:54.490147 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:07:54.490153 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:07:54.490161 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-07 01:07:54.490172 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-07 01:07:54.490181 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-07 01:07:54.490188 | orchestrator | skipping: [testbed-manager] 2026-03-07 01:07:54.490195 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-07 01:07:54.490211 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-07 01:07:54.490222 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 
'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-07 01:07:54.490231 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-07 01:07:54.490238 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-07 01:07:54.490246 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:07:54.490259 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-07 01:07:54.490267 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-07 01:07:54.490275 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-07 01:07:54.490282 | orchestrator | skipping: [testbed-node-3] 2026-03-07 01:07:54.490295 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-07 01:07:54.490306 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': 
{'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-07 01:07:54.490313 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-07 01:07:54.490319 | orchestrator | skipping: [testbed-node-4] 2026-03-07 01:07:54.490326 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-07 01:07:54.490332 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-07 01:07:54.490345 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-07 01:07:54.490352 | orchestrator | skipping: [testbed-node-5] 2026-03-07 01:07:54.490358 | orchestrator | 2026-03-07 01:07:54.490364 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2026-03-07 01:07:54.490371 | orchestrator | Saturday 07 March 2026 01:03:28 +0000 (0:00:02.333) 0:00:23.425 ******** 2026-03-07 01:07:54.490378 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-07 01:07:54.490389 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': 
['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-07 01:07:54.490399 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-07 01:07:54.490406 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-07 01:07:54.490413 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-07 01:07:54.490420 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-07 01:07:54.490431 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 01:07:54.490439 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-07 01:07:54.490450 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 
'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-07 01:07:54.490458 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 01:07:54.490466 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 01:07:54.490478 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-07 
01:07:54.490485 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-07 01:07:54.490492 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 01:07:54.490505 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-07 01:07:54.490513 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-07 01:07:54.490526 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 01:07:54.490533 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 01:07:54.490545 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-07 01:07:54.490553 | orchestrator 
| changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-07 01:07:54.490561 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-07 01:07:54.490568 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-07 01:07:54.490581 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-07 01:07:54.490595 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-07 01:07:54.490602 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-07 01:07:54.490609 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 01:07:54.490614 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 01:07:54.490618 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 01:07:54.490623 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 01:07:54.490627 | orchestrator | 2026-03-07 01:07:54.490632 | orchestrator | TASK [prometheus 
: Find custom prometheus alert rules files] ******************* 2026-03-07 01:07:54.490636 | orchestrator | Saturday 07 March 2026 01:03:35 +0000 (0:00:06.643) 0:00:30.069 ******** 2026-03-07 01:07:54.490645 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-07 01:07:54.490650 | orchestrator | 2026-03-07 01:07:54.490654 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2026-03-07 01:07:54.490662 | orchestrator | Saturday 07 March 2026 01:03:37 +0000 (0:00:01.612) 0:00:31.681 ******** 2026-03-07 01:07:54.490667 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1094421, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.6370022, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-07 01:07:54.490673 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1094421, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.6370022, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-07 01:07:54.490677 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1094421, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.6370022, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-07 01:07:54.490686 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1094462, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.6417756, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-07 01:07:54.490691 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1094421, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.6370022, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-07 01:07:54.490696 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1094462, 'dev': 122, 'nlink': 1, 
'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.6417756, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-07 01:07:54.490703 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1094412, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.636359, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-07 01:07:54.490711 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1094421, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.6370022, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-07 01:07:54.490716 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1094421, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.6370022, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-07 01:07:54.490720 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1094462, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.6417756, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-07 01:07:54.490730 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1094421, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.6370022, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-07 01:07:54.490734 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1094462, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.6417756, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  
2026-03-07 01:07:54.490739 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1094451, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.6403005, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-07 01:07:54.490751 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1094412, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.636359, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-07 01:07:54.490756 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1094412, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.636359, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-07 01:07:54.490761 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1094462, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.6417756, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-07 01:07:54.490765 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1094462, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.6417756, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-07 01:07:54.490773 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1094412, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.636359, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-07 01:07:54.490778 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1094451, 'dev': 
122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.6403005, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-07 01:07:54.490783 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1094409, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.6350172, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-07 01:07:54.490793 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1094451, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.6403005, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-07 01:07:54.490798 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1094462, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.6417756, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': 
True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-07 01:07:54.490803 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1094412, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.636359, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-07 01:07:54.490807 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1094409, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.6350172, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-07 01:07:54.490815 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1094412, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.636359, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-07 01:07:54.490820 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1094425, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.6380954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-07 01:07:54.490824 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1094451, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.6403005, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-07 01:07:54.490835 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1094409, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.6350172, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-07 01:07:54.490843 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1094425, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.6380954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-07 01:07:54.490848 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1094451, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.6403005, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-07 01:07:54.490852 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 1094444, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.6398418, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-07 01:07:54.490860 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1094451, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.6403005, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-07 01:07:54.490868 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1094425, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.6380954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-07 01:07:54.490880 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1094409, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.6350172, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-07 01:07:54.490888 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1094409, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.6350172, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-07 01:07:54.490899 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1094412, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.636359, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-07 01:07:54.490906 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 1094444, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.6398418, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-07 01:07:54.490914 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1094409, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.6350172, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-07 01:07:54.490924 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1094432, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.6380954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-07 01:07:54.490932 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1094425, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.6380954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-07 01:07:54.490943 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1094425, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.6380954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-07 01:07:54.490949 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 1094444, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.6398418, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-07 01:07:54.490962 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1094425, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.6380954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-07 01:07:54.490970 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1094432, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.6380954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-07 01:07:54.490978 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1094432, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.6380954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-07 01:07:54.490986 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 1094444, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.6398418, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-07 01:07:54.490998 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1094419, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.636359, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-07 01:07:54.491010 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 1094444, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.6398418, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-07 01:07:54.491018 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 1094444, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.6398418, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-07 01:07:54.491031 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1094419, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.636359, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-07 01:07:54.491038 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1094432, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.6380954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-07 01:07:54.491046 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1094432, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.6380954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-07 01:07:54.491053 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1094451, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.6403005, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-07 01:07:54.491064 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1094432, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.6380954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-07 01:07:54.491076 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1094419, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.636359, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-07 01:07:54.491084 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1094419, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.636359, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-07 01:07:54.491097 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094459, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.641171, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-07 01:07:54.491105 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1094419, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.636359, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-07 01:07:54.491194 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1094419, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.636359, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-07 01:07:54.491202 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094459, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.641171, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-07 01:07:54.491218 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094459, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.641171, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-07 01:07:54.491223 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094459, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.641171, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-07 01:07:54.491228 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094459, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.641171, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-07 01:07:54.491242 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094405, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.6340973, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-07 01:07:54.491250 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094459, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.641171, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-07 01:07:54.491257 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094405, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.6340973, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-07 01:07:54.491265 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1094472, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.6429884, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-07 01:07:54.491282 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1094409, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.6350172, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-07 01:07:54.491290 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094405, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.6340973, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-07 01:07:54.491296 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094405, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.6340973, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-07 01:07:54.491309 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094405, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.6340973, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-07 01:07:54.491316 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094405, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.6340973, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-07 01:07:54.491323 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1094472, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.6429884, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-07 01:07:54.491331 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1094457, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.6409109, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-07 01:07:54.491349 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1094472, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.6429884, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-07 01:07:54.491357 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1094472, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.6429884, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-07 01:07:54.491364 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1094457, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.6409109, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-07 01:07:54.491377 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1094457, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.6409109, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-07 01:07:54.491385 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1094472, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.6429884, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-07 01:07:54.491393 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1094457, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.6409109, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-07 01:07:54.491409 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1094472, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.6429884, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-07 01:07:54.491421 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094411, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.6352365, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-07 01:07:54.491428 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1094425, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.6380954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-07 01:07:54.491435 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1094457, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.6409109, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-07 01:07:54.491445 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094411, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.6352365, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-07 01:07:54.491450 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1094457, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.6409109, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-07 01:07:54.491454 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094411, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.6352365, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True,
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-07 01:07:54.491462 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094411, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.6352365, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-07 01:07:54.491470 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094411, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.6352365, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-07 01:07:54.491475 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1094407, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.634598, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-07 01:07:54.491483 
| orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1094407, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.634598, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-07 01:07:54.491496 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1094441, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.6391363, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-07 01:07:54.491503 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1094441, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.6391363, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-07 01:07:54.491510 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1094407, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.634598, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-07 01:07:54.491522 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094411, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.6352365, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-07 01:07:54.491533 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1094407, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.634598, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-07 01:07:54.491541 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1094407, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.634598, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-07 01:07:54.491548 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1094436, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.6385617, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-07 01:07:54.491559 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1094436, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.6385617, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-07 01:07:54.491567 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1094407, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.634598, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-07 01:07:54.491574 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1094471, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.642547, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-07 01:07:54.491587 | orchestrator | skipping: [testbed-node-0]
2026-03-07 01:07:54.491597 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 1094444, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.6398418, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-07 01:07:54.491608 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1094441, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.6391363, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-07 01:07:54.491615 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1094441, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.6391363, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-07 01:07:54.491622 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1094441, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.6391363, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-07 01:07:54.491634 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1094441, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.6391363, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-07 01:07:54.491641 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules',
'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1094471, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.642547, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-07 01:07:54.491653 | orchestrator | skipping: [testbed-node-2]
2026-03-07 01:07:54.491661 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1094436, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.6385617, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-07 01:07:54.491669 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1094436, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.6385617, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-07 01:07:54.491680 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1094436, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.6385617, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-07 01:07:54.491687 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1094436, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.6385617, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-07 01:07:54.491694 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1094432, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.6380954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-07 01:07:54.491708 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1094471, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.642547, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-07 01:07:54.491715 | orchestrator | skipping: [testbed-node-4]
2026-03-07 01:07:54.491722 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1094471, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.642547, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-07 01:07:54.491737 | orchestrator | skipping: [testbed-node-3]
2026-03-07 01:07:54.491745 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1094471, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.642547, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-07 01:07:54.491751 | orchestrator | skipping: [testbed-node-5]
2026-03-07 01:07:54.491759 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1094471, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.642547, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-07 01:07:54.491766 | orchestrator | skipping: [testbed-node-1]
2026-03-07 01:07:54.491779 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1094419, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.636359, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-07 01:07:54.491786 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094459, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.641171, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-07 01:07:54.491793 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094405, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.6340973, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-07 01:07:54.491806 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1094472, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.6429884, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-07 01:07:54.491819 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1094457, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.6409109, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-07 01:07:54.491827 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094411, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.6352365, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-07 01:07:54.491834 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1094407, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.634598, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-07 01:07:54.491845 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1094441, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.6391363, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-07 01:07:54.491853 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1094436, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.6385617, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-07 01:07:54.491859 | orchestrator | changed: [testbed-manager] => (item={'path':
'/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1094471, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.642547, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-07 01:07:54.491866 | orchestrator |
2026-03-07 01:07:54.491874 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ********************
2026-03-07 01:07:54.491882 | orchestrator | Saturday 07 March 2026 01:04:21 +0000 (0:00:44.753) 0:01:16.434 ********
2026-03-07 01:07:54.491894 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-07 01:07:54.491899 | orchestrator |
2026-03-07 01:07:54.491907 | orchestrator | TASK [prometheus : Find prometheus host config overrides] **********************
2026-03-07 01:07:54.491912 | orchestrator | Saturday 07 March 2026 01:04:24 +0000 (0:00:02.081) 0:01:18.517 ********
2026-03-07 01:07:54.491919 | orchestrator | [WARNING]: Skipped
2026-03-07 01:07:54.491926 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-07 01:07:54.491933 | orchestrator | manager/prometheus.yml.d' path due to this access issue:
2026-03-07 01:07:54.491940 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-07 01:07:54.491947 | orchestrator | manager/prometheus.yml.d' is not a directory
2026-03-07 01:07:54.491954 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-07 01:07:54.491960 | orchestrator | [WARNING]: Skipped
2026-03-07 01:07:54.491967 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-07 01:07:54.491975 | orchestrator | node-0/prometheus.yml.d' path due to this access issue:
2026-03-07 01:07:54.491982 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-07 01:07:54.491989 | orchestrator | node-0/prometheus.yml.d' is not a directory
2026-03-07 01:07:54.491996 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-07 01:07:54.492003 | orchestrator | [WARNING]: Skipped
2026-03-07 01:07:54.492009 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-07 01:07:54.492016 | orchestrator | node-1/prometheus.yml.d' path due to this access issue:
2026-03-07 01:07:54.492022 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-07 01:07:54.492029 | orchestrator | node-1/prometheus.yml.d' is not a directory
2026-03-07 01:07:54.492037 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-03-07 01:07:54.492043 | orchestrator | [WARNING]: Skipped
2026-03-07 01:07:54.492050 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-07 01:07:54.492056 | orchestrator | node-2/prometheus.yml.d' path due to this access issue:
2026-03-07 01:07:54.492063 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-07 01:07:54.492070 | orchestrator | node-2/prometheus.yml.d' is not a directory
2026-03-07 01:07:54.492077 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-03-07 01:07:54.492084 | orchestrator | [WARNING]: Skipped
2026-03-07 01:07:54.492091 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-07 01:07:54.492098 | orchestrator | node-3/prometheus.yml.d' path due to this access issue:
2026-03-07 01:07:54.492105 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-07 01:07:54.492128 | orchestrator | node-3/prometheus.yml.d' is not a directory
2026-03-07 01:07:54.492136 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-03-07 01:07:54.492142 | orchestrator | [WARNING]: Skipped
2026-03-07 01:07:54.492149 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-07 01:07:54.492157 | orchestrator | node-4/prometheus.yml.d' path due to this access issue:
2026-03-07 01:07:54.492170 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-07 01:07:54.492177 | orchestrator | node-4/prometheus.yml.d' is not a directory
2026-03-07 01:07:54.492184 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-03-07 01:07:54.492190 | orchestrator | [WARNING]: Skipped
2026-03-07 01:07:54.492197 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-07 01:07:54.492204 | orchestrator | node-5/prometheus.yml.d' path due to this access issue:
2026-03-07 01:07:54.492212 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-07 01:07:54.492218 | orchestrator | node-5/prometheus.yml.d' is not a directory
2026-03-07 01:07:54.492225 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-03-07 01:07:54.492239 | orchestrator |
2026-03-07 01:07:54.492243 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************
2026-03-07 01:07:54.492247 | orchestrator | Saturday 07 March 2026 01:04:31 +0000 (0:00:06.964) 0:01:25.481 ********
2026-03-07 01:07:54.492251 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-03-07 01:07:54.492256 | orchestrator | skipping: [testbed-node-1]
2026-03-07 01:07:54.492264 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-03-07 01:07:54.492268 | orchestrator | skipping: [testbed-node-0]
2026-03-07 01:07:54.492273 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-03-07 01:07:54.492277 | orchestrator | skipping: [testbed-node-4]
2026-03-07 01:07:54.492281 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-03-07 01:07:54.492285 | orchestrator | skipping: [testbed-node-2]
2026-03-07 01:07:54.492289 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-03-07 01:07:54.492293 | orchestrator | skipping: [testbed-node-3]
2026-03-07 01:07:54.492298 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-03-07 01:07:54.492302 | orchestrator | skipping: [testbed-node-5]
2026-03-07 01:07:54.492306 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-03-07 01:07:54.492310 | orchestrator |
2026-03-07 01:07:54.492315 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ********************
2026-03-07 01:07:54.492319 | orchestrator | Saturday 07 March 2026 01:05:14 +0000 (0:00:43.504) 0:02:08.986 ********
2026-03-07 01:07:54.492323 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-03-07 01:07:54.492333 | orchestrator | skipping: [testbed-node-0]
2026-03-07 01:07:54.492338 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-03-07 01:07:54.492342 | orchestrator | skipping: [testbed-node-1]
2026-03-07 01:07:54.492346 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-03-07 01:07:54.492350 | orchestrator | skipping: [testbed-node-3]
2026-03-07 01:07:54.492355 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-03-07 01:07:54.492359 | orchestrator | skipping: [testbed-node-2]
2026-03-07 01:07:54.492364 | orchestrator | skipping:
[testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-07 01:07:54.492368 | orchestrator | skipping: [testbed-node-5] 2026-03-07 01:07:54.492372 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-07 01:07:54.492376 | orchestrator | skipping: [testbed-node-4] 2026-03-07 01:07:54.492380 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2026-03-07 01:07:54.492384 | orchestrator | 2026-03-07 01:07:54.492389 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2026-03-07 01:07:54.492393 | orchestrator | Saturday 07 March 2026 01:05:21 +0000 (0:00:07.374) 0:02:16.361 ******** 2026-03-07 01:07:54.492397 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-07 01:07:54.492403 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-07 01:07:54.492408 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:07:54.492412 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-07 01:07:54.492416 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:07:54.492426 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:07:54.492430 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2026-03-07 01:07:54.492435 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-07 01:07:54.492439 | orchestrator | skipping: [testbed-node-3] 2026-03-07 01:07:54.492443 | orchestrator | skipping: 
[testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-07 01:07:54.492448 | orchestrator | skipping: [testbed-node-5] 2026-03-07 01:07:54.492452 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-07 01:07:54.492459 | orchestrator | skipping: [testbed-node-4] 2026-03-07 01:07:54.492464 | orchestrator | 2026-03-07 01:07:54.492468 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2026-03-07 01:07:54.492472 | orchestrator | Saturday 07 March 2026 01:05:25 +0000 (0:00:03.875) 0:02:20.236 ******** 2026-03-07 01:07:54.492477 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-07 01:07:54.492481 | orchestrator | 2026-03-07 01:07:54.492485 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2026-03-07 01:07:54.492489 | orchestrator | Saturday 07 March 2026 01:05:26 +0000 (0:00:01.191) 0:02:21.427 ******** 2026-03-07 01:07:54.492493 | orchestrator | skipping: [testbed-manager] 2026-03-07 01:07:54.492497 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:07:54.492501 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:07:54.492506 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:07:54.492510 | orchestrator | skipping: [testbed-node-3] 2026-03-07 01:07:54.492514 | orchestrator | skipping: [testbed-node-4] 2026-03-07 01:07:54.492518 | orchestrator | skipping: [testbed-node-5] 2026-03-07 01:07:54.492522 | orchestrator | 2026-03-07 01:07:54.492527 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2026-03-07 01:07:54.492531 | orchestrator | Saturday 07 March 2026 01:05:28 +0000 (0:00:01.995) 0:02:23.423 ******** 2026-03-07 01:07:54.492535 | orchestrator | skipping: [testbed-manager] 2026-03-07 01:07:54.492539 | orchestrator | 
skipping: [testbed-node-3] 2026-03-07 01:07:54.492543 | orchestrator | skipping: [testbed-node-4] 2026-03-07 01:07:54.492547 | orchestrator | skipping: [testbed-node-5] 2026-03-07 01:07:54.492552 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:07:54.492556 | orchestrator | changed: [testbed-node-2] 2026-03-07 01:07:54.492560 | orchestrator | changed: [testbed-node-1] 2026-03-07 01:07:54.492564 | orchestrator | 2026-03-07 01:07:54.492568 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2026-03-07 01:07:54.492573 | orchestrator | Saturday 07 March 2026 01:05:32 +0000 (0:00:03.780) 0:02:27.204 ******** 2026-03-07 01:07:54.492577 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-07 01:07:54.492581 | orchestrator | skipping: [testbed-manager] 2026-03-07 01:07:54.492585 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-07 01:07:54.492590 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:07:54.492594 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-07 01:07:54.492598 | orchestrator | skipping: [testbed-node-3] 2026-03-07 01:07:54.492602 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-07 01:07:54.492607 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:07:54.492618 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-07 01:07:54 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:07:54.492635 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:07:54.492644 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-07 01:07:54.492657 | orchestrator | skipping: [testbed-node-5] 
2026-03-07 01:07:54.492663 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-07 01:07:54.492669 | orchestrator | skipping: [testbed-node-4] 2026-03-07 01:07:54.492676 | orchestrator | 2026-03-07 01:07:54.492682 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2026-03-07 01:07:54.492689 | orchestrator | Saturday 07 March 2026 01:05:36 +0000 (0:00:03.552) 0:02:30.756 ******** 2026-03-07 01:07:54.492696 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-07 01:07:54.492703 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:07:54.492709 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-07 01:07:54.492714 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:07:54.492718 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-07 01:07:54.492722 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:07:54.492726 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-07 01:07:54.492730 | orchestrator | skipping: [testbed-node-3] 2026-03-07 01:07:54.492734 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-07 01:07:54.492738 | orchestrator | skipping: [testbed-node-4] 2026-03-07 01:07:54.492742 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-07 01:07:54.492747 | orchestrator | skipping: [testbed-node-5] 2026-03-07 01:07:54.492751 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 
2026-03-07 01:07:54.492755 | orchestrator | 2026-03-07 01:07:54.492759 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2026-03-07 01:07:54.492763 | orchestrator | Saturday 07 March 2026 01:05:39 +0000 (0:00:03.534) 0:02:34.291 ******** 2026-03-07 01:07:54.492767 | orchestrator | [WARNING]: Skipped 2026-03-07 01:07:54.492771 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2026-03-07 01:07:54.492776 | orchestrator | due to this access issue: 2026-03-07 01:07:54.492780 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2026-03-07 01:07:54.492784 | orchestrator | not a directory 2026-03-07 01:07:54.492788 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-07 01:07:54.492792 | orchestrator | 2026-03-07 01:07:54.492800 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2026-03-07 01:07:54.492804 | orchestrator | Saturday 07 March 2026 01:05:42 +0000 (0:00:02.381) 0:02:36.672 ******** 2026-03-07 01:07:54.492808 | orchestrator | skipping: [testbed-manager] 2026-03-07 01:07:54.492812 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:07:54.492817 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:07:54.492821 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:07:54.492825 | orchestrator | skipping: [testbed-node-3] 2026-03-07 01:07:54.492829 | orchestrator | skipping: [testbed-node-4] 2026-03-07 01:07:54.492833 | orchestrator | skipping: [testbed-node-5] 2026-03-07 01:07:54.492837 | orchestrator | 2026-03-07 01:07:54.492841 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2026-03-07 01:07:54.492845 | orchestrator | Saturday 07 March 2026 01:05:43 +0000 (0:00:01.782) 0:02:38.455 ******** 2026-03-07 01:07:54.492849 | orchestrator | skipping: [testbed-manager] 2026-03-07 01:07:54.492853 | orchestrator | skipping: 
[testbed-node-0] 2026-03-07 01:07:54.492857 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:07:54.492861 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:07:54.492865 | orchestrator | skipping: [testbed-node-3] 2026-03-07 01:07:54.492873 | orchestrator | skipping: [testbed-node-4] 2026-03-07 01:07:54.492877 | orchestrator | skipping: [testbed-node-5] 2026-03-07 01:07:54.492881 | orchestrator | 2026-03-07 01:07:54.492885 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2026-03-07 01:07:54.492889 | orchestrator | Saturday 07 March 2026 01:05:45 +0000 (0:00:01.303) 0:02:39.759 ******** 2026-03-07 01:07:54.492895 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-07 01:07:54.492906 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 
'dimensions': {}}}) 2026-03-07 01:07:54.492912 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-07 01:07:54.492916 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-07 01:07:54.492922 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 01:07:54.492929 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 01:07:54.492933 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-07 01:07:54.492941 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 01:07:54.492946 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-07 01:07:54.492955 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-07 01:07:54.492960 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 01:07:54.492965 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 01:07:54.492969 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 01:07:54.492977 | orchestrator | changed: [testbed-node-3] => (item={'key': 
'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-07 01:07:54.492982 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-07 01:07:54.492992 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-07 01:07:54.492998 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-07 01:07:54.493006 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-07 01:07:54.493010 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-07 01:07:54.493015 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-07 01:07:54.493019 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-07 01:07:54.493027 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-07 01:07:54.493035 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 
'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-07 01:07:54.493042 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-07 01:07:54.493050 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 01:07:54.493054 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 01:07:54.493059 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 01:07:54.493063 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-07 01:07:54.493070 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 01:07:54.493077 | orchestrator | 2026-03-07 01:07:54.493082 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2026-03-07 01:07:54.493086 | orchestrator | Saturday 07 March 2026 01:05:52 +0000 (0:00:06.875) 0:02:46.634 ******** 2026-03-07 01:07:54.493090 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-03-07 01:07:54.493094 | orchestrator | skipping: [testbed-manager] 2026-03-07 01:07:54.493099 | orchestrator | 
2026-03-07 01:07:54.493103 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-07 01:07:54.493107 | orchestrator | Saturday 07 March 2026 01:05:55 +0000 (0:00:03.678) 0:02:50.313 ******** 2026-03-07 01:07:54.493131 | orchestrator | 2026-03-07 01:07:54.493139 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-07 01:07:54.493144 | orchestrator | Saturday 07 March 2026 01:05:56 +0000 (0:00:00.169) 0:02:50.482 ******** 2026-03-07 01:07:54.493148 | orchestrator | 2026-03-07 01:07:54.493152 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-07 01:07:54.493156 | orchestrator | Saturday 07 March 2026 01:05:56 +0000 (0:00:00.095) 0:02:50.578 ******** 2026-03-07 01:07:54.493160 | orchestrator | 2026-03-07 01:07:54.493164 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-07 01:07:54.493168 | orchestrator | Saturday 07 March 2026 01:05:56 +0000 (0:00:00.073) 0:02:50.651 ******** 2026-03-07 01:07:54.493172 | orchestrator | 2026-03-07 01:07:54.493176 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-07 01:07:54.493181 | orchestrator | Saturday 07 March 2026 01:05:56 +0000 (0:00:00.227) 0:02:50.879 ******** 2026-03-07 01:07:54.493185 | orchestrator | 2026-03-07 01:07:54.493189 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-07 01:07:54.493193 | orchestrator | Saturday 07 March 2026 01:05:56 +0000 (0:00:00.130) 0:02:51.009 ******** 2026-03-07 01:07:54.493197 | orchestrator | 2026-03-07 01:07:54.493201 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-07 01:07:54.493205 | orchestrator | Saturday 07 March 2026 01:05:56 +0000 (0:00:00.136) 0:02:51.145 ******** 2026-03-07 01:07:54.493209 | orchestrator 
| 2026-03-07 01:07:54.493213 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] *************
2026-03-07 01:07:54.493217 | orchestrator | Saturday 07 March 2026 01:05:56 +0000 (0:00:00.199) 0:02:51.345 ********
2026-03-07 01:07:54.493221 | orchestrator | changed: [testbed-manager]
2026-03-07 01:07:54.493225 | orchestrator |
2026-03-07 01:07:54.493233 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ******
2026-03-07 01:07:54.493237 | orchestrator | Saturday 07 March 2026 01:06:16 +0000 (0:00:20.016) 0:03:11.362 ********
2026-03-07 01:07:54.493241 | orchestrator | changed: [testbed-node-4]
2026-03-07 01:07:54.493245 | orchestrator | changed: [testbed-manager]
2026-03-07 01:07:54.493250 | orchestrator | changed: [testbed-node-2]
2026-03-07 01:07:54.493254 | orchestrator | changed: [testbed-node-0]
2026-03-07 01:07:54.493258 | orchestrator | changed: [testbed-node-3]
2026-03-07 01:07:54.493262 | orchestrator | changed: [testbed-node-1]
2026-03-07 01:07:54.493266 | orchestrator | changed: [testbed-node-5]
2026-03-07 01:07:54.493270 | orchestrator |
2026-03-07 01:07:54.493274 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] ****
2026-03-07 01:07:54.493278 | orchestrator | Saturday 07 March 2026 01:06:36 +0000 (0:00:19.198) 0:03:30.561 ********
2026-03-07 01:07:54.493283 | orchestrator | changed: [testbed-node-2]
2026-03-07 01:07:54.493287 | orchestrator | changed: [testbed-node-0]
2026-03-07 01:07:54.493291 | orchestrator | changed: [testbed-node-1]
2026-03-07 01:07:54.493299 | orchestrator |
2026-03-07 01:07:54.493303 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] ***
2026-03-07 01:07:54.493307 | orchestrator | Saturday 07 March 2026 01:06:48 +0000 (0:00:12.648) 0:03:43.209 ********
2026-03-07 01:07:54.493311 | orchestrator | changed: [testbed-node-1]
2026-03-07 01:07:54.493315 | orchestrator | changed: [testbed-node-0]
2026-03-07 01:07:54.493319 | orchestrator | changed: [testbed-node-2]
2026-03-07 01:07:54.493323 | orchestrator |
2026-03-07 01:07:54.493327 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] ***********
2026-03-07 01:07:54.493331 | orchestrator | Saturday 07 March 2026 01:07:01 +0000 (0:00:12.803) 0:03:56.012 ********
2026-03-07 01:07:54.493335 | orchestrator | changed: [testbed-manager]
2026-03-07 01:07:54.493339 | orchestrator | changed: [testbed-node-1]
2026-03-07 01:07:54.493343 | orchestrator | changed: [testbed-node-4]
2026-03-07 01:07:54.493347 | orchestrator | changed: [testbed-node-2]
2026-03-07 01:07:54.493351 | orchestrator | changed: [testbed-node-3]
2026-03-07 01:07:54.493356 | orchestrator | changed: [testbed-node-0]
2026-03-07 01:07:54.493360 | orchestrator | changed: [testbed-node-5]
2026-03-07 01:07:54.493364 | orchestrator |
2026-03-07 01:07:54.493368 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] *******
2026-03-07 01:07:54.493372 | orchestrator | Saturday 07 March 2026 01:07:17 +0000 (0:00:16.450) 0:04:12.463 ********
2026-03-07 01:07:54.493376 | orchestrator | changed: [testbed-manager]
2026-03-07 01:07:54.493380 | orchestrator |
2026-03-07 01:07:54.493384 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] ***
2026-03-07 01:07:54.493388 | orchestrator | Saturday 07 March 2026 01:07:26 +0000 (0:00:08.369) 0:04:20.832 ********
2026-03-07 01:07:54.493393 | orchestrator | changed: [testbed-node-0]
2026-03-07 01:07:54.493397 | orchestrator | changed: [testbed-node-1]
2026-03-07 01:07:54.493401 | orchestrator | changed: [testbed-node-2]
2026-03-07 01:07:54.493405 | orchestrator |
2026-03-07 01:07:54.493409 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] ***
2026-03-07 01:07:54.493413 | orchestrator | Saturday 07 March 2026 01:07:31 +0000 (0:00:05.619) 0:04:26.451 ********
2026-03-07 01:07:54.493417 | orchestrator | changed: [testbed-manager]
2026-03-07 01:07:54.493421 | orchestrator |
2026-03-07 01:07:54.493425 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] ***
2026-03-07 01:07:54.493430 | orchestrator | Saturday 07 March 2026 01:07:42 +0000 (0:00:10.790) 0:04:37.242 ********
2026-03-07 01:07:54.493437 | orchestrator | changed: [testbed-node-3]
2026-03-07 01:07:54.493441 | orchestrator | changed: [testbed-node-4]
2026-03-07 01:07:54.493445 | orchestrator | changed: [testbed-node-5]
2026-03-07 01:07:54.493449 | orchestrator |
2026-03-07 01:07:54.493453 | orchestrator | PLAY RECAP *********************************************************************
2026-03-07 01:07:54.493457 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2026-03-07 01:07:54.493462 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-03-07 01:07:54.493466 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-03-07 01:07:54.493470 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-03-07 01:07:54.493475 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-03-07 01:07:54.493479 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-03-07 01:07:54.493483 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-03-07 01:07:54.493490 | orchestrator |
2026-03-07 01:07:54.493495 | orchestrator |
2026-03-07 01:07:54.493499 | orchestrator | TASKS RECAP ********************************************************************
2026-03-07 01:07:54.493503 | orchestrator | Saturday 07 March 2026 01:07:53 +0000 (0:00:11.018) 0:04:48.261 ********
2026-03-07 01:07:54.493507 | orchestrator | ===============================================================================
2026-03-07 01:07:54.493511 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 44.75s
2026-03-07 01:07:54.493515 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 43.50s
2026-03-07 01:07:54.493519 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 20.02s
2026-03-07 01:07:54.493523 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 19.20s
2026-03-07 01:07:54.493530 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 16.45s
2026-03-07 01:07:54.493534 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ----------- 12.80s
2026-03-07 01:07:54.493538 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container -------------- 12.65s
2026-03-07 01:07:54.493542 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container ------------- 11.02s
2026-03-07 01:07:54.493546 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------ 10.79s
2026-03-07 01:07:54.493550 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 8.37s
2026-03-07 01:07:54.493555 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 7.83s
2026-03-07 01:07:54.493559 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 7.38s
2026-03-07 01:07:54.493563 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 6.96s
2026-03-07 01:07:54.493567 | orchestrator | prometheus : Check prometheus containers -------------------------------- 6.88s
2026-03-07 01:07:54.493571 | orchestrator | prometheus : Copying over config.json files
----------------------------- 6.64s 2026-03-07 01:07:54.493575 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container -------- 5.62s 2026-03-07 01:07:54.493579 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 4.59s 2026-03-07 01:07:54.493583 | orchestrator | prometheus : Copying over prometheus alertmanager config file ----------- 3.87s 2026-03-07 01:07:54.493587 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 3.78s 2026-03-07 01:07:54.493592 | orchestrator | prometheus : Creating prometheus database user and setting permissions --- 3.68s 2026-03-07 01:07:57.519592 | orchestrator | 2026-03-07 01:07:57 | INFO  | Task f8f426f5-2e46-4aea-be60-c236b1a43b38 is in state STARTED 2026-03-07 01:07:57.520756 | orchestrator | 2026-03-07 01:07:57 | INFO  | Task dd436db4-c0e4-4e80-8902-e092306d4084 is in state STARTED 2026-03-07 01:07:57.521409 | orchestrator | 2026-03-07 01:07:57 | INFO  | Task aef7bb9f-a2c6-494a-9844-2edc32fb92ae is in state STARTED 2026-03-07 01:07:57.521841 | orchestrator | 2026-03-07 01:07:57 | INFO  | Task 7d0cbddf-4962-4d44-a1f3-6f8ed978f2f9 is in state STARTED 2026-03-07 01:07:57.521857 | orchestrator | 2026-03-07 01:07:57 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:08:00.560095 | orchestrator | 2026-03-07 01:08:00 | INFO  | Task f8f426f5-2e46-4aea-be60-c236b1a43b38 is in state STARTED 2026-03-07 01:08:00.563601 | orchestrator | 2026-03-07 01:08:00 | INFO  | Task dd436db4-c0e4-4e80-8902-e092306d4084 is in state STARTED 2026-03-07 01:08:00.565158 | orchestrator | 2026-03-07 01:08:00 | INFO  | Task aef7bb9f-a2c6-494a-9844-2edc32fb92ae is in state STARTED 2026-03-07 01:08:00.566805 | orchestrator | 2026-03-07 01:08:00 | INFO  | Task 7d0cbddf-4962-4d44-a1f3-6f8ed978f2f9 is in state STARTED 2026-03-07 01:08:00.566828 | orchestrator | 2026-03-07 01:08:00 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:08:03.610897 | 
orchestrator | 2026-03-07 01:08:03 | INFO  | Task f8f426f5-2e46-4aea-be60-c236b1a43b38 is in state STARTED 2026-03-07 01:08:03.612465 | orchestrator | 2026-03-07 01:08:03 | INFO  | Task dd436db4-c0e4-4e80-8902-e092306d4084 is in state STARTED 2026-03-07 01:08:03.614566 | orchestrator | 2026-03-07 01:08:03 | INFO  | Task aef7bb9f-a2c6-494a-9844-2edc32fb92ae is in state STARTED 2026-03-07 01:08:03.615542 | orchestrator | 2026-03-07 01:08:03 | INFO  | Task 7d0cbddf-4962-4d44-a1f3-6f8ed978f2f9 is in state STARTED 2026-03-07 01:08:03.615594 | orchestrator | 2026-03-07 01:08:03 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:08:06.653247 | orchestrator | 2026-03-07 01:08:06 | INFO  | Task f8f426f5-2e46-4aea-be60-c236b1a43b38 is in state STARTED 2026-03-07 01:08:06.653966 | orchestrator | 2026-03-07 01:08:06 | INFO  | Task dd436db4-c0e4-4e80-8902-e092306d4084 is in state STARTED 2026-03-07 01:08:06.656459 | orchestrator | 2026-03-07 01:08:06 | INFO  | Task aef7bb9f-a2c6-494a-9844-2edc32fb92ae is in state STARTED 2026-03-07 01:08:06.657341 | orchestrator | 2026-03-07 01:08:06 | INFO  | Task 7d0cbddf-4962-4d44-a1f3-6f8ed978f2f9 is in state STARTED 2026-03-07 01:08:06.657374 | orchestrator | 2026-03-07 01:08:06 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:08:09.703085 | orchestrator | 2026-03-07 01:08:09 | INFO  | Task f8f426f5-2e46-4aea-be60-c236b1a43b38 is in state STARTED 2026-03-07 01:08:09.703223 | orchestrator | 2026-03-07 01:08:09 | INFO  | Task dd436db4-c0e4-4e80-8902-e092306d4084 is in state STARTED 2026-03-07 01:08:09.703768 | orchestrator | 2026-03-07 01:08:09 | INFO  | Task aef7bb9f-a2c6-494a-9844-2edc32fb92ae is in state STARTED 2026-03-07 01:08:09.705002 | orchestrator | 2026-03-07 01:08:09 | INFO  | Task 7d0cbddf-4962-4d44-a1f3-6f8ed978f2f9 is in state STARTED 2026-03-07 01:08:09.705046 | orchestrator | 2026-03-07 01:08:09 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:08:12.756364 | orchestrator | 2026-03-07 
01:08:12 | INFO  | Task f8f426f5-2e46-4aea-be60-c236b1a43b38 is in state STARTED 2026-03-07 01:08:12.757103 | orchestrator | 2026-03-07 01:08:12 | INFO  | Task dd436db4-c0e4-4e80-8902-e092306d4084 is in state STARTED 2026-03-07 01:08:12.758401 | orchestrator | 2026-03-07 01:08:12 | INFO  | Task aef7bb9f-a2c6-494a-9844-2edc32fb92ae is in state STARTED 2026-03-07 01:08:12.760049 | orchestrator | 2026-03-07 01:08:12 | INFO  | Task 7d0cbddf-4962-4d44-a1f3-6f8ed978f2f9 is in state STARTED 2026-03-07 01:08:12.760086 | orchestrator | 2026-03-07 01:08:12 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:08:15.802730 | orchestrator | 2026-03-07 01:08:15 | INFO  | Task f8f426f5-2e46-4aea-be60-c236b1a43b38 is in state STARTED 2026-03-07 01:08:15.804320 | orchestrator | 2026-03-07 01:08:15 | INFO  | Task dd436db4-c0e4-4e80-8902-e092306d4084 is in state STARTED 2026-03-07 01:08:15.806936 | orchestrator | 2026-03-07 01:08:15 | INFO  | Task aef7bb9f-a2c6-494a-9844-2edc32fb92ae is in state STARTED 2026-03-07 01:08:15.809897 | orchestrator | 2026-03-07 01:08:15 | INFO  | Task 7d0cbddf-4962-4d44-a1f3-6f8ed978f2f9 is in state STARTED 2026-03-07 01:08:15.809956 | orchestrator | 2026-03-07 01:08:15 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:08:18.861343 | orchestrator | 2026-03-07 01:08:18 | INFO  | Task f8f426f5-2e46-4aea-be60-c236b1a43b38 is in state STARTED 2026-03-07 01:08:18.862926 | orchestrator | 2026-03-07 01:08:18 | INFO  | Task dd436db4-c0e4-4e80-8902-e092306d4084 is in state STARTED 2026-03-07 01:08:18.864245 | orchestrator | 2026-03-07 01:08:18 | INFO  | Task aef7bb9f-a2c6-494a-9844-2edc32fb92ae is in state STARTED 2026-03-07 01:08:18.866092 | orchestrator | 2026-03-07 01:08:18 | INFO  | Task 7d0cbddf-4962-4d44-a1f3-6f8ed978f2f9 is in state STARTED 2026-03-07 01:08:18.866312 | orchestrator | 2026-03-07 01:08:18 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:08:21.908229 | orchestrator | 2026-03-07 01:08:21 | INFO  | Task 
f8f426f5-2e46-4aea-be60-c236b1a43b38 is in state STARTED 2026-03-07 01:08:21.911155 | orchestrator | 2026-03-07 01:08:21 | INFO  | Task dd436db4-c0e4-4e80-8902-e092306d4084 is in state STARTED 2026-03-07 01:08:21.914295 | orchestrator | 2026-03-07 01:08:21 | INFO  | Task aef7bb9f-a2c6-494a-9844-2edc32fb92ae is in state STARTED 2026-03-07 01:08:21.915394 | orchestrator | 2026-03-07 01:08:21 | INFO  | Task 7d0cbddf-4962-4d44-a1f3-6f8ed978f2f9 is in state STARTED 2026-03-07 01:08:21.915439 | orchestrator | 2026-03-07 01:08:21 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:08:24.957679 | orchestrator | 2026-03-07 01:08:24 | INFO  | Task f8f426f5-2e46-4aea-be60-c236b1a43b38 is in state STARTED 2026-03-07 01:08:24.959334 | orchestrator | 2026-03-07 01:08:24 | INFO  | Task dd436db4-c0e4-4e80-8902-e092306d4084 is in state STARTED 2026-03-07 01:08:24.961342 | orchestrator | 2026-03-07 01:08:24 | INFO  | Task aef7bb9f-a2c6-494a-9844-2edc32fb92ae is in state STARTED 2026-03-07 01:08:24.962703 | orchestrator | 2026-03-07 01:08:24 | INFO  | Task 7d0cbddf-4962-4d44-a1f3-6f8ed978f2f9 is in state STARTED 2026-03-07 01:08:24.962746 | orchestrator | 2026-03-07 01:08:24 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:08:28.014240 | orchestrator | 2026-03-07 01:08:28 | INFO  | Task f8f426f5-2e46-4aea-be60-c236b1a43b38 is in state STARTED 2026-03-07 01:08:28.015895 | orchestrator | 2026-03-07 01:08:28 | INFO  | Task dd436db4-c0e4-4e80-8902-e092306d4084 is in state STARTED 2026-03-07 01:08:28.019855 | orchestrator | 2026-03-07 01:08:28 | INFO  | Task aef7bb9f-a2c6-494a-9844-2edc32fb92ae is in state STARTED 2026-03-07 01:08:28.022942 | orchestrator | 2026-03-07 01:08:28 | INFO  | Task 7d0cbddf-4962-4d44-a1f3-6f8ed978f2f9 is in state STARTED 2026-03-07 01:08:28.023004 | orchestrator | 2026-03-07 01:08:28 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:08:31.082337 | orchestrator | 2026-03-07 01:08:31 | INFO  | Task 
f8f426f5-2e46-4aea-be60-c236b1a43b38 is in state STARTED 2026-03-07 01:08:31.087166 | orchestrator | 2026-03-07 01:08:31 | INFO  | Task dd436db4-c0e4-4e80-8902-e092306d4084 is in state STARTED 2026-03-07 01:08:31.088761 | orchestrator | 2026-03-07 01:08:31 | INFO  | Task aef7bb9f-a2c6-494a-9844-2edc32fb92ae is in state STARTED 2026-03-07 01:08:31.091611 | orchestrator | 2026-03-07 01:08:31 | INFO  | Task 7d0cbddf-4962-4d44-a1f3-6f8ed978f2f9 is in state STARTED 2026-03-07 01:08:31.091668 | orchestrator | 2026-03-07 01:08:31 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:08:34.143464 | orchestrator | 2026-03-07 01:08:34 | INFO  | Task f8f426f5-2e46-4aea-be60-c236b1a43b38 is in state STARTED 2026-03-07 01:08:34.145582 | orchestrator | 2026-03-07 01:08:34 | INFO  | Task dd436db4-c0e4-4e80-8902-e092306d4084 is in state STARTED 2026-03-07 01:08:34.147996 | orchestrator | 2026-03-07 01:08:34 | INFO  | Task aef7bb9f-a2c6-494a-9844-2edc32fb92ae is in state STARTED 2026-03-07 01:08:34.149837 | orchestrator | 2026-03-07 01:08:34 | INFO  | Task 7d0cbddf-4962-4d44-a1f3-6f8ed978f2f9 is in state STARTED 2026-03-07 01:08:34.149975 | orchestrator | 2026-03-07 01:08:34 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:08:37.202449 | orchestrator | 2026-03-07 01:08:37 | INFO  | Task f8f426f5-2e46-4aea-be60-c236b1a43b38 is in state STARTED 2026-03-07 01:08:37.205002 | orchestrator | 2026-03-07 01:08:37 | INFO  | Task dd436db4-c0e4-4e80-8902-e092306d4084 is in state STARTED 2026-03-07 01:08:37.206051 | orchestrator | 2026-03-07 01:08:37 | INFO  | Task aef7bb9f-a2c6-494a-9844-2edc32fb92ae is in state STARTED 2026-03-07 01:08:37.207728 | orchestrator | 2026-03-07 01:08:37 | INFO  | Task 7d0cbddf-4962-4d44-a1f3-6f8ed978f2f9 is in state STARTED 2026-03-07 01:08:37.208452 | orchestrator | 2026-03-07 01:08:37 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:08:40.261109 | orchestrator | 2026-03-07 01:08:40 | INFO  | Task 
f8f426f5-2e46-4aea-be60-c236b1a43b38 is in state STARTED
2026-03-07 01:08:40.264043 | orchestrator | 2026-03-07 01:08:40 | INFO  | Task dd436db4-c0e4-4e80-8902-e092306d4084 is in state STARTED
2026-03-07 01:08:40.268424 | orchestrator | 2026-03-07 01:08:40 | INFO  | Task aef7bb9f-a2c6-494a-9844-2edc32fb92ae is in state STARTED
2026-03-07 01:08:40.272942 | orchestrator | 2026-03-07 01:08:40 | INFO  | Task 7d0cbddf-4962-4d44-a1f3-6f8ed978f2f9 is in state STARTED
2026-03-07 01:08:40.273024 | orchestrator | 2026-03-07 01:08:40 | INFO  | Wait 1 second(s) until the next check
2026-03-07 01:08:43.322777 | orchestrator |
2026-03-07 01:08:43.322873 | orchestrator |
2026-03-07 01:08:43.322886 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-07 01:08:43.322896 | orchestrator |
2026-03-07 01:08:43.322905 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-07 01:08:43.322914 | orchestrator | Saturday 07 March 2026 01:07:32 +0000 (0:00:00.302) 0:00:00.302 ********
2026-03-07 01:08:43.322923 | orchestrator | ok: [testbed-node-0]
2026-03-07 01:08:43.322933 | orchestrator | ok: [testbed-node-1]
2026-03-07 01:08:43.322942 | orchestrator | ok: [testbed-node-2]
2026-03-07 01:08:43.322950 | orchestrator |
2026-03-07 01:08:43.322959 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-07 01:08:43.322987 | orchestrator | Saturday 07 March 2026 01:07:32 +0000 (0:00:00.343) 0:00:00.646 ********
2026-03-07 01:08:43.322997 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True)
2026-03-07 01:08:43.323007 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True)
2026-03-07 01:08:43.323015 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True)
2026-03-07 01:08:43.323025 | orchestrator |
2026-03-07 01:08:43.323033 | orchestrator | PLAY [Apply role placement] ****************************************************
2026-03-07 01:08:43.323041 | orchestrator |
2026-03-07 01:08:43.323050 | orchestrator | TASK [placement : include_tasks] ***********************************************
2026-03-07 01:08:43.323059 | orchestrator | Saturday 07 March 2026 01:07:32 +0000 (0:00:00.493) 0:00:01.140 ********
2026-03-07 01:08:43.323068 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-07 01:08:43.323078 | orchestrator |
2026-03-07 01:08:43.323086 | orchestrator | TASK [service-ks-register : placement | Creating services] *********************
2026-03-07 01:08:43.323096 | orchestrator | Saturday 07 March 2026 01:07:33 +0000 (0:00:00.636) 0:00:01.777 ********
2026-03-07 01:08:43.323104 | orchestrator | changed: [testbed-node-0] => (item=placement (placement))
2026-03-07 01:08:43.323113 | orchestrator |
2026-03-07 01:08:43.323122 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ********************
2026-03-07 01:08:43.323193 | orchestrator | Saturday 07 March 2026 01:07:37 +0000 (0:00:03.694) 0:00:05.471 ********
2026-03-07 01:08:43.323202 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal)
2026-03-07 01:08:43.323212 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public)
2026-03-07 01:08:43.323220 | orchestrator |
2026-03-07 01:08:43.323229 | orchestrator | TASK [service-ks-register : placement | Creating projects] *********************
2026-03-07 01:08:43.323238 | orchestrator | Saturday 07 March 2026 01:07:44 +0000 (0:00:06.814) 0:00:12.286 ********
2026-03-07 01:08:43.323247 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-03-07 01:08:43.323279 | orchestrator |
2026-03-07 01:08:43.323289 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************
2026-03-07 01:08:43.323299 | orchestrator | Saturday 07 March 2026 01:07:47 +0000 (0:00:03.368) 0:00:15.655 ********
2026-03-07 01:08:43.323307 | orchestrator | changed: [testbed-node-0] => (item=placement -> service)
2026-03-07 01:08:43.323316 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-07 01:08:43.323326 | orchestrator |
2026-03-07 01:08:43.323335 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************
2026-03-07 01:08:43.323344 | orchestrator | Saturday 07 March 2026 01:07:51 +0000 (0:00:04.138) 0:00:19.794 ********
2026-03-07 01:08:43.323354 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-03-07 01:08:43.323363 | orchestrator |
2026-03-07 01:08:43.323372 | orchestrator | TASK [service-ks-register : placement | Granting user roles] *******************
2026-03-07 01:08:43.323380 | orchestrator | Saturday 07 March 2026 01:07:55 +0000 (0:00:03.880) 0:00:23.675 ********
2026-03-07 01:08:43.323389 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin)
2026-03-07 01:08:43.323397 | orchestrator |
2026-03-07 01:08:43.323406 | orchestrator | TASK [placement : include_tasks] ***********************************************
2026-03-07 01:08:43.323415 | orchestrator | Saturday 07 March 2026 01:07:59 +0000 (0:00:04.226) 0:00:27.901 ********
2026-03-07 01:08:43.323423 | orchestrator | skipping: [testbed-node-0]
2026-03-07 01:08:43.323432 | orchestrator | skipping: [testbed-node-1]
2026-03-07 01:08:43.323441 | orchestrator | skipping: [testbed-node-2]
2026-03-07 01:08:43.323450 | orchestrator |
2026-03-07 01:08:43.323459 | orchestrator | TASK [placement : Ensuring config directories exist] ***************************
2026-03-07 01:08:43.323469 | orchestrator | Saturday 07 March 2026 01:08:00 +0000 (0:00:00.537) 0:00:28.439 ********
2026-03-07 01:08:43.323481 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name':
'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-07 01:08:43.323522 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-07 01:08:43.323535 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-07 01:08:43.323552 | orchestrator | 2026-03-07 01:08:43.323562 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2026-03-07 01:08:43.323572 | orchestrator | Saturday 07 March 2026 01:08:01 +0000 (0:00:01.495) 0:00:29.935 ******** 2026-03-07 01:08:43.323581 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:08:43.323588 | orchestrator | 2026-03-07 01:08:43.323595 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2026-03-07 01:08:43.323601 | orchestrator | Saturday 07 March 2026 01:08:01 +0000 (0:00:00.132) 0:00:30.067 ******** 2026-03-07 01:08:43.323607 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:08:43.323613 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:08:43.323618 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:08:43.323624 | orchestrator | 2026-03-07 01:08:43.323629 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-03-07 01:08:43.323636 | orchestrator | Saturday 07 March 2026 01:08:02 +0000 (0:00:00.637) 0:00:30.704 ******** 2026-03-07 01:08:43.323645 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 01:08:43.323654 | 
orchestrator | 2026-03-07 01:08:43.323663 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2026-03-07 01:08:43.323672 | orchestrator | Saturday 07 March 2026 01:08:03 +0000 (0:00:00.602) 0:00:31.307 ******** 2026-03-07 01:08:43.323683 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-07 01:08:43.323703 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-07 01:08:43.323718 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-07 01:08:43.323734 | orchestrator | 2026-03-07 01:08:43.323743 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2026-03-07 01:08:43.323753 | orchestrator | Saturday 07 March 2026 01:08:04 +0000 (0:00:01.678) 0:00:32.985 ******** 2026-03-07 01:08:43.323762 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-07 01:08:43.323772 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:08:43.323781 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-07 01:08:43.323791 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:08:43.323806 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-07 01:08:43.323816 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:08:43.323825 | orchestrator | 2026-03-07 01:08:43.323835 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2026-03-07 01:08:43.323841 | orchestrator | Saturday 07 March 2026 01:08:05 +0000 (0:00:00.804) 0:00:33.790 ******** 2026-03-07 01:08:43.323855 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-07 01:08:43.323861 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:08:43.323867 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-07 01:08:43.323872 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:08:43.323878 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-07 01:08:43.323883 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:08:43.323889 | orchestrator | 2026-03-07 01:08:43.323894 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2026-03-07 01:08:43.323900 | orchestrator | Saturday 07 March 2026 01:08:06 +0000 (0:00:00.813) 0:00:34.603 ******** 2026-03-07 01:08:43.323909 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-07 01:08:43.323923 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-07 01:08:43.323931 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 
'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-07 01:08:43.323941 | orchestrator | 2026-03-07 01:08:43.323951 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2026-03-07 01:08:43.323961 | orchestrator | Saturday 07 March 2026 01:08:07 +0000 (0:00:01.406) 0:00:36.009 ******** 2026-03-07 01:08:43.323967 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 
2026-03-07 01:08:43.323972 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-07 01:08:43.324049 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-07 01:08:43.324067 | orchestrator | 2026-03-07 01:08:43.324076 | orchestrator | TASK [placement : Copying over placement-api 
wsgi configuration] *************** 2026-03-07 01:08:43.324085 | orchestrator | Saturday 07 March 2026 01:08:10 +0000 (0:00:02.825) 0:00:38.835 ******** 2026-03-07 01:08:43.324095 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-03-07 01:08:43.324106 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-03-07 01:08:43.324116 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-03-07 01:08:43.324143 | orchestrator | 2026-03-07 01:08:43.324152 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2026-03-07 01:08:43.324162 | orchestrator | Saturday 07 March 2026 01:08:12 +0000 (0:00:01.510) 0:00:40.345 ******** 2026-03-07 01:08:43.324171 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:08:43.324180 | orchestrator | changed: [testbed-node-1] 2026-03-07 01:08:43.324189 | orchestrator | changed: [testbed-node-2] 2026-03-07 01:08:43.324196 | orchestrator | 2026-03-07 01:08:43.324201 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2026-03-07 01:08:43.324207 | orchestrator | Saturday 07 March 2026 01:08:13 +0000 (0:00:01.362) 0:00:41.708 ******** 2026-03-07 01:08:43.324213 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-07 01:08:43.324218 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:08:43.324224 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-07 01:08:43.324245 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:08:43.324257 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': 
{'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-07 01:08:43.324263 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:08:43.324272 | orchestrator | 2026-03-07 01:08:43.324280 | orchestrator | TASK [placement : Check placement containers] ********************************** 2026-03-07 01:08:43.324289 | orchestrator | Saturday 07 March 2026 01:08:14 +0000 (0:00:00.548) 0:00:42.257 ******** 2026-03-07 01:08:43.324304 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-07 01:08:43.324314 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-07 01:08:43.324324 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-07 01:08:43.324342 | orchestrator | 2026-03-07 01:08:43.324349 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2026-03-07 01:08:43.324354 | orchestrator | Saturday 07 March 2026 01:08:15 +0000 (0:00:01.153) 0:00:43.410 ******** 2026-03-07 01:08:43.324360 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:08:43.324365 | orchestrator | 2026-03-07 01:08:43.324371 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 
2026-03-07 01:08:43.324376 | orchestrator | Saturday 07 March 2026 01:08:17 +0000 (0:00:02.632) 0:00:46.043 ********
2026-03-07 01:08:43.324381 | orchestrator | changed: [testbed-node-0]
2026-03-07 01:08:43.324387 | orchestrator |
2026-03-07 01:08:43.324392 | orchestrator | TASK [placement : Running placement bootstrap container] ***********************
2026-03-07 01:08:43.324397 | orchestrator | Saturday 07 March 2026 01:08:20 +0000 (0:00:02.501) 0:00:48.544 ********
2026-03-07 01:08:43.324403 | orchestrator | changed: [testbed-node-0]
2026-03-07 01:08:43.324408 | orchestrator |
2026-03-07 01:08:43.324414 | orchestrator | TASK [placement : Flush handlers] **********************************************
2026-03-07 01:08:43.324419 | orchestrator | Saturday 07 March 2026 01:08:34 +0000 (0:00:14.298) 0:01:02.843 ********
2026-03-07 01:08:43.324424 | orchestrator |
2026-03-07 01:08:43.324430 | orchestrator | TASK [placement : Flush handlers] **********************************************
2026-03-07 01:08:43.324435 | orchestrator | Saturday 07 March 2026 01:08:34 +0000 (0:00:00.078) 0:01:02.922 ********
2026-03-07 01:08:43.324441 | orchestrator |
2026-03-07 01:08:43.324450 | orchestrator | TASK [placement : Flush handlers] **********************************************
2026-03-07 01:08:43.324456 | orchestrator | Saturday 07 March 2026 01:08:34 +0000 (0:00:00.075) 0:01:02.997 ********
2026-03-07 01:08:43.324461 | orchestrator |
2026-03-07 01:08:43.324466 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ******************
2026-03-07 01:08:43.324474 | orchestrator | Saturday 07 March 2026 01:08:34 +0000 (0:00:00.072) 0:01:03.070 ********
2026-03-07 01:08:43.324483 | orchestrator | changed: [testbed-node-0]
2026-03-07 01:08:43.324492 | orchestrator | changed: [testbed-node-2]
2026-03-07 01:08:43.324502 | orchestrator | changed: [testbed-node-1]
2026-03-07 01:08:43.324508 | orchestrator |
2026-03-07 01:08:43.324513 | orchestrator | PLAY RECAP *********************************************************************
2026-03-07 01:08:43.324523 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-07 01:08:43.324530 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-07 01:08:43.324536 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-07 01:08:43.324541 | orchestrator |
2026-03-07 01:08:43.324547 | orchestrator |
2026-03-07 01:08:43.324552 | orchestrator | TASKS RECAP ********************************************************************
2026-03-07 01:08:43.324557 | orchestrator | Saturday 07 March 2026 01:08:40 +0000 (0:00:05.399) 0:01:08.469 ********
2026-03-07 01:08:43.324563 | orchestrator | ===============================================================================
2026-03-07 01:08:43.324568 | orchestrator | placement : Running placement bootstrap container ---------------------- 14.30s
2026-03-07 01:08:43.324573 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 6.81s
2026-03-07 01:08:43.324579 | orchestrator | placement : Restart placement-api container ----------------------------- 5.40s
2026-03-07 01:08:43.324588 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 4.23s
2026-03-07 01:08:43.324597 | orchestrator | service-ks-register : placement | Creating users ------------------------ 4.14s
2026-03-07 01:08:43.324607 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.88s
2026-03-07 01:08:43.324617 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.69s
2026-03-07 01:08:43.324634 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.37s
2026-03-07 01:08:43.324643 | orchestrator | placement : Copying over placement.conf --------------------------------- 2.83s
2026-03-07 01:08:43.324653 | orchestrator | placement : Creating placement databases -------------------------------- 2.63s
2026-03-07 01:08:43.324663 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.50s
2026-03-07 01:08:43.324669 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.68s
2026-03-07 01:08:43.324674 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.51s
2026-03-07 01:08:43.324680 | orchestrator | placement : Ensuring config directories exist --------------------------- 1.50s
2026-03-07 01:08:43.324685 | orchestrator | placement : Copying over config.json files for services ----------------- 1.41s
2026-03-07 01:08:43.324690 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.36s
2026-03-07 01:08:43.324696 | orchestrator | placement : Check placement containers ---------------------------------- 1.15s
2026-03-07 01:08:43.324701 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 0.81s
2026-03-07 01:08:43.324706 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS certificate --- 0.80s
2026-03-07 01:08:43.324712 | orchestrator | placement : Set placement policy file ----------------------------------- 0.64s
2026-03-07 01:08:43.324718 | orchestrator | 2026-03-07 01:08:43 | INFO  | Task f8f426f5-2e46-4aea-be60-c236b1a43b38 is in state SUCCESS
2026-03-07 01:08:43.324797 | orchestrator | 2026-03-07 01:08:43 | INFO  | Task dd436db4-c0e4-4e80-8902-e092306d4084 is in state STARTED
2026-03-07 01:08:43.326550 | orchestrator | 2026-03-07 01:08:43 | INFO  | Task aef7bb9f-a2c6-494a-9844-2edc32fb92ae is in state STARTED
2026-03-07 01:08:43.332326 | orchestrator | 2026-03-07 01:08:43 | INFO  | Task 7da988d9-fb65-4758-842f-cb8ac35d35ef is in state STARTED
2026-03-07 01:08:43.335603 | orchestrator |
2026-03-07 01:08:43 | INFO  | Task 7d0cbddf-4962-4d44-a1f3-6f8ed978f2f9 is in state STARTED
2026-03-07 01:08:43.335666 | orchestrator | 2026-03-07 01:08:43 | INFO  | Wait 1 second(s) until the next check
2026-03-07 01:08:46.433464 | orchestrator | 2026-03-07 01:08:46 | INFO  | Task dd436db4-c0e4-4e80-8902-e092306d4084 is in state STARTED
2026-03-07 01:08:46.434909 | orchestrator | 2026-03-07 01:08:46 | INFO  | Task aef7bb9f-a2c6-494a-9844-2edc32fb92ae is in state STARTED
2026-03-07 01:08:46.439008 | orchestrator | 2026-03-07 01:08:46 | INFO  | Task 7da988d9-fb65-4758-842f-cb8ac35d35ef is in state STARTED
2026-03-07 01:08:46.441344 | orchestrator | 2026-03-07 01:08:46 | INFO  | Task 7d0cbddf-4962-4d44-a1f3-6f8ed978f2f9 is in state STARTED
2026-03-07 01:08:46.441419 | orchestrator | 2026-03-07 01:08:46 | INFO  | Wait 1 second(s) until the next check
2026-03-07 01:08:49.477267 | orchestrator | 2026-03-07 01:08:49 | INFO  | Task dd436db4-c0e4-4e80-8902-e092306d4084 is in state STARTED
2026-03-07 01:08:49.479278 | orchestrator | 2026-03-07 01:08:49 | INFO  | Task aef7bb9f-a2c6-494a-9844-2edc32fb92ae is in state STARTED
2026-03-07 01:08:49.482609 | orchestrator | 2026-03-07 01:08:49 | INFO  | Task 7da988d9-fb65-4758-842f-cb8ac35d35ef is in state STARTED
2026-03-07 01:08:49.485336 | orchestrator | 2026-03-07 01:08:49 | INFO  | Task 7d0cbddf-4962-4d44-a1f3-6f8ed978f2f9 is in state STARTED
2026-03-07 01:08:49.485691 | orchestrator | 2026-03-07 01:08:49 | INFO  | Wait 1 second(s) until the next check
2026-03-07 01:08:52.530488 | orchestrator | 2026-03-07 01:08:52 | INFO  | Task dd436db4-c0e4-4e80-8902-e092306d4084 is in state STARTED
2026-03-07 01:08:52.531986 | orchestrator | 2026-03-07 01:08:52 | INFO  | Task aef7bb9f-a2c6-494a-9844-2edc32fb92ae is in state STARTED
2026-03-07 01:08:52.534453 | orchestrator | 2026-03-07 01:08:52 | INFO  | Task 7da988d9-fb65-4758-842f-cb8ac35d35ef is in state STARTED
2026-03-07 01:08:52.537073 | orchestrator | 2026-03-07 01:08:52 | INFO  | Task 7d0cbddf-4962-4d44-a1f3-6f8ed978f2f9 is in state STARTED
2026-03-07 01:08:52.537336 | orchestrator | 2026-03-07 01:08:52 | INFO  | Wait 1 second(s) until the next check
2026-03-07 01:08:55.580930 | orchestrator | 2026-03-07 01:08:55 | INFO  | Task dd436db4-c0e4-4e80-8902-e092306d4084 is in state STARTED
2026-03-07 01:08:55.583024 | orchestrator | 2026-03-07 01:08:55 | INFO  | Task aef7bb9f-a2c6-494a-9844-2edc32fb92ae is in state STARTED
2026-03-07 01:08:55.583444 | orchestrator | 2026-03-07 01:08:55 | INFO  | Task 7da988d9-fb65-4758-842f-cb8ac35d35ef is in state STARTED
2026-03-07 01:08:55.584832 | orchestrator | 2026-03-07 01:08:55 | INFO  | Task 7d0cbddf-4962-4d44-a1f3-6f8ed978f2f9 is in state STARTED
2026-03-07 01:08:55.585060 | orchestrator | 2026-03-07 01:08:55 | INFO  | Wait 1 second(s) until the next check
2026-03-07 01:08:58.615114 | orchestrator | 2026-03-07 01:08:58 | INFO  | Task dd436db4-c0e4-4e80-8902-e092306d4084 is in state STARTED
2026-03-07 01:08:58.616448 | orchestrator | 2026-03-07 01:08:58 | INFO  | Task aef7bb9f-a2c6-494a-9844-2edc32fb92ae is in state STARTED
2026-03-07 01:08:58.618285 | orchestrator | 2026-03-07 01:08:58 | INFO  | Task 7da988d9-fb65-4758-842f-cb8ac35d35ef is in state STARTED
2026-03-07 01:08:58.619247 | orchestrator | 2026-03-07 01:08:58 | INFO  | Task 7d0cbddf-4962-4d44-a1f3-6f8ed978f2f9 is in state STARTED
2026-03-07 01:08:58.619273 | orchestrator | 2026-03-07 01:08:58 | INFO  | Wait 1 second(s) until the next check
2026-03-07 01:09:01.678994 | orchestrator | 2026-03-07 01:09:01 | INFO  | Task dd436db4-c0e4-4e80-8902-e092306d4084 is in state STARTED
2026-03-07 01:09:01.680499 | orchestrator | 2026-03-07 01:09:01 | INFO  | Task aef7bb9f-a2c6-494a-9844-2edc32fb92ae is in state STARTED
2026-03-07 01:09:01.683002 | orchestrator | 2026-03-07 01:09:01 | INFO  | Task 7da988d9-fb65-4758-842f-cb8ac35d35ef is in state STARTED
2026-03-07 01:09:01.684976 | orchestrator | 2026-03-07 01:09:01 | INFO  | Task 7d0cbddf-4962-4d44-a1f3-6f8ed978f2f9 is in state STARTED
2026-03-07 01:09:01.688290 | orchestrator | 2026-03-07 01:09:01 | INFO  | Wait 1 second(s) until the next check
2026-03-07 01:09:04.726722 | orchestrator | 2026-03-07 01:09:04 | INFO  | Task dd436db4-c0e4-4e80-8902-e092306d4084 is in state STARTED
2026-03-07 01:09:04.728347 | orchestrator | 2026-03-07 01:09:04 | INFO  | Task aef7bb9f-a2c6-494a-9844-2edc32fb92ae is in state STARTED
2026-03-07 01:09:04.729639 | orchestrator | 2026-03-07 01:09:04 | INFO  | Task 7da988d9-fb65-4758-842f-cb8ac35d35ef is in state STARTED
2026-03-07 01:09:04.730851 | orchestrator | 2026-03-07 01:09:04 | INFO  | Task 7d0cbddf-4962-4d44-a1f3-6f8ed978f2f9 is in state STARTED
2026-03-07 01:09:04.731090 | orchestrator | 2026-03-07 01:09:04 | INFO  | Wait 1 second(s) until the next check
2026-03-07 01:09:07.783216 | orchestrator | 2026-03-07 01:09:07 | INFO  | Task dd436db4-c0e4-4e80-8902-e092306d4084 is in state STARTED
2026-03-07 01:09:07.784291 | orchestrator | 2026-03-07 01:09:07 | INFO  | Task aef7bb9f-a2c6-494a-9844-2edc32fb92ae is in state STARTED
2026-03-07 01:09:07.786762 | orchestrator | 2026-03-07 01:09:07 | INFO  | Task 7da988d9-fb65-4758-842f-cb8ac35d35ef is in state STARTED
2026-03-07 01:09:07.789041 | orchestrator | 2026-03-07 01:09:07 | INFO  | Task 7d0cbddf-4962-4d44-a1f3-6f8ed978f2f9 is in state STARTED
2026-03-07 01:09:07.789113 | orchestrator | 2026-03-07 01:09:07 | INFO  | Wait 1 second(s) until the next check
2026-03-07 01:09:10.831372 | orchestrator | 2026-03-07 01:09:10 | INFO  | Task dd436db4-c0e4-4e80-8902-e092306d4084 is in state STARTED
2026-03-07 01:09:10.832048 | orchestrator | 2026-03-07 01:09:10 | INFO  | Task aef7bb9f-a2c6-494a-9844-2edc32fb92ae is in state STARTED
2026-03-07 01:09:10.834122 | orchestrator | 2026-03-07 01:09:10 | INFO  | Task 7da988d9-fb65-4758-842f-cb8ac35d35ef is in state STARTED
2026-03-07 01:09:10.835449 | orchestrator | 2026-03-07 01:09:10 | INFO  | Task 7d0cbddf-4962-4d44-a1f3-6f8ed978f2f9 is in state STARTED
2026-03-07 01:09:10.835502 | orchestrator | 2026-03-07 01:09:10 | INFO  | Wait 1 second(s) until the next check
2026-03-07 01:09:13.880658 | orchestrator | 2026-03-07 01:09:13 | INFO  | Task dd436db4-c0e4-4e80-8902-e092306d4084 is in state STARTED
2026-03-07 01:09:13.887668 | orchestrator | 2026-03-07 01:09:13 | INFO  | Task aef7bb9f-a2c6-494a-9844-2edc32fb92ae is in state STARTED
2026-03-07 01:09:13.891327 | orchestrator | 2026-03-07 01:09:13 | INFO  | Task 7da988d9-fb65-4758-842f-cb8ac35d35ef is in state STARTED
2026-03-07 01:09:13.894235 | orchestrator | 2026-03-07 01:09:13 | INFO  | Task 7d0cbddf-4962-4d44-a1f3-6f8ed978f2f9 is in state STARTED
2026-03-07 01:09:13.894320 | orchestrator | 2026-03-07 01:09:13 | INFO  | Wait 1 second(s) until the next check
2026-03-07 01:09:16.936842 | orchestrator | 2026-03-07 01:09:16 | INFO  | Task dd436db4-c0e4-4e80-8902-e092306d4084 is in state STARTED
2026-03-07 01:09:16.938190 | orchestrator | 2026-03-07 01:09:16 | INFO  | Task aef7bb9f-a2c6-494a-9844-2edc32fb92ae is in state STARTED
2026-03-07 01:09:16.941018 | orchestrator | 2026-03-07 01:09:16 | INFO  | Task 7da988d9-fb65-4758-842f-cb8ac35d35ef is in state STARTED
2026-03-07 01:09:16.942430 | orchestrator | 2026-03-07 01:09:16 | INFO  | Task 7d0cbddf-4962-4d44-a1f3-6f8ed978f2f9 is in state STARTED
2026-03-07 01:09:16.942480 | orchestrator | 2026-03-07 01:09:16 | INFO  | Wait 1 second(s) until the next check
2026-03-07 01:09:19.990345 | orchestrator | 2026-03-07 01:09:19 | INFO  | Task dd436db4-c0e4-4e80-8902-e092306d4084 is in state STARTED
2026-03-07 01:09:19.991785 | orchestrator | 2026-03-07 01:09:19 | INFO  | Task aef7bb9f-a2c6-494a-9844-2edc32fb92ae is in state STARTED
2026-03-07 01:09:19.995220 | orchestrator | 2026-03-07 01:09:19 | INFO  | Task 7da988d9-fb65-4758-842f-cb8ac35d35ef is in state STARTED
2026-03-07 01:09:19.995289 | orchestrator | 2026-03-07 01:09:19 | INFO  | Task 7d0cbddf-4962-4d44-a1f3-6f8ed978f2f9 is in state STARTED
2026-03-07 01:09:19.995303 | orchestrator | 2026-03-07 01:09:19 | INFO  | Wait 1 second(s) until the next check
2026-03-07 01:09:23.045583 | orchestrator | 2026-03-07 01:09:23 | INFO  | Task dd436db4-c0e4-4e80-8902-e092306d4084 is in state STARTED
2026-03-07 01:09:23.049182 | orchestrator | 2026-03-07 01:09:23 | INFO  | Task aef7bb9f-a2c6-494a-9844-2edc32fb92ae is in state STARTED
2026-03-07 01:09:23.050243 | orchestrator | 2026-03-07 01:09:23 | INFO  | Task 7da988d9-fb65-4758-842f-cb8ac35d35ef is in state SUCCESS
2026-03-07 01:09:23.052124 | orchestrator | 2026-03-07 01:09:23 | INFO  | Task 7d0cbddf-4962-4d44-a1f3-6f8ed978f2f9 is in state STARTED
2026-03-07 01:09:23.052240 | orchestrator | 2026-03-07 01:09:23 | INFO  | Wait 1 second(s) until the next check
2026-03-07 01:09:26.094275 | orchestrator | 2026-03-07 01:09:26 | INFO  | Task dd436db4-c0e4-4e80-8902-e092306d4084 is in state STARTED
2026-03-07 01:09:26.094773 | orchestrator | 2026-03-07 01:09:26 | INFO  | Task c276371f-d5d4-46ec-8488-c3507f3b179e is in state STARTED
2026-03-07 01:09:26.096380 | orchestrator | 2026-03-07 01:09:26 | INFO  | Task aef7bb9f-a2c6-494a-9844-2edc32fb92ae is in state STARTED
2026-03-07 01:09:26.098347 | orchestrator | 2026-03-07 01:09:26 | INFO  | Task 7d0cbddf-4962-4d44-a1f3-6f8ed978f2f9 is in state STARTED
2026-03-07 01:09:26.098402 | orchestrator | 2026-03-07 01:09:26 | INFO  | Wait 1 second(s) until the next check
2026-03-07 01:09:29.155959 | orchestrator | 2026-03-07 01:09:29 | INFO  | Task dd436db4-c0e4-4e80-8902-e092306d4084 is in state STARTED
2026-03-07 01:09:29.156807 | orchestrator | 2026-03-07 01:09:29 | INFO  | Task c276371f-d5d4-46ec-8488-c3507f3b179e is in state STARTED
2026-03-07 01:09:29.163191 | orchestrator | 2026-03-07 01:09:29 | INFO  | Task aef7bb9f-a2c6-494a-9844-2edc32fb92ae is in state STARTED
2026-03-07 01:09:29.164570 | orchestrator | 2026-03-07 01:09:29 | INFO  | Task
7d0cbddf-4962-4d44-a1f3-6f8ed978f2f9 is in state STARTED
2026-03-07 01:09:29.164610 | orchestrator | 2026-03-07 01:09:29 | INFO  | Wait 1 second(s) until the next check
2026-03-07 01:09:32.211994 | orchestrator | 2026-03-07 01:09:32 | INFO  | Task dd436db4-c0e4-4e80-8902-e092306d4084 is in state SUCCESS
2026-03-07 01:09:32.213911 | orchestrator |
2026-03-07 01:09:32.213967 | orchestrator |
2026-03-07 01:09:32.213981 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-07 01:09:32.213993 | orchestrator |
2026-03-07 01:09:32.214006 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-07 01:09:32.214063 | orchestrator | Saturday 07 March 2026 01:08:46 +0000 (0:00:00.359) 0:00:00.359 ********
2026-03-07 01:09:32.214078 | orchestrator | ok: [testbed-manager]
2026-03-07 01:09:32.214087 | orchestrator | ok: [testbed-node-0]
2026-03-07 01:09:32.214094 | orchestrator | ok: [testbed-node-1]
2026-03-07 01:09:32.214100 | orchestrator | ok: [testbed-node-2]
2026-03-07 01:09:32.214107 | orchestrator | ok: [testbed-node-3]
2026-03-07 01:09:32.214114 | orchestrator | ok: [testbed-node-4]
2026-03-07 01:09:32.214121 | orchestrator | ok: [testbed-node-5]
2026-03-07 01:09:32.214128 | orchestrator |
2026-03-07 01:09:32.214172 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-07 01:09:32.214181 | orchestrator | Saturday 07 March 2026 01:08:47 +0000 (0:00:01.077) 0:00:01.437 ********
2026-03-07 01:09:32.214189 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True)
2026-03-07 01:09:32.214196 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True)
2026-03-07 01:09:32.214203 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True)
2026-03-07 01:09:32.214210 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True)
2026-03-07 01:09:32.214217 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True)
2026-03-07 01:09:32.214223 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True)
2026-03-07 01:09:32.214230 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True)
2026-03-07 01:09:32.214237 | orchestrator |
2026-03-07 01:09:32.214244 | orchestrator | PLAY [Apply role ceph-rgw] *****************************************************
2026-03-07 01:09:32.214251 | orchestrator |
2026-03-07 01:09:32.214258 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************
2026-03-07 01:09:32.214264 | orchestrator | Saturday 07 March 2026 01:08:47 +0000 (0:00:00.858) 0:00:02.295 ********
2026-03-07 01:09:32.214273 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-07 01:09:32.214281 | orchestrator |
2026-03-07 01:09:32.214288 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] **********************
2026-03-07 01:09:32.214294 | orchestrator | Saturday 07 March 2026 01:08:49 +0000 (0:00:01.903) 0:00:04.199 ********
2026-03-07 01:09:32.214301 | orchestrator | changed: [testbed-manager] => (item=swift (object-store))
2026-03-07 01:09:32.214308 | orchestrator |
2026-03-07 01:09:32.214315 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] *********************
2026-03-07 01:09:32.214321 | orchestrator | Saturday 07 March 2026 01:08:53 +0000 (0:00:03.764) 0:00:07.963 ********
2026-03-07 01:09:32.214328 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal)
2026-03-07 01:09:32.214337 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public)
2026-03-07 01:09:32.214366 | orchestrator |
2026-03-07 01:09:32.214373 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] **********************
2026-03-07 01:09:32.214379 | orchestrator | Saturday 07 March 2026 01:09:01 +0000 (0:00:08.164) 0:00:16.128 ********
2026-03-07 01:09:32.214386 | orchestrator | ok: [testbed-manager] => (item=service)
2026-03-07 01:09:32.214393 | orchestrator |
2026-03-07 01:09:32.214399 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] *************************
2026-03-07 01:09:32.214406 | orchestrator | Saturday 07 March 2026 01:09:05 +0000 (0:00:03.719) 0:00:19.847 ********
2026-03-07 01:09:32.214413 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service)
2026-03-07 01:09:32.214420 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-07 01:09:32.214426 | orchestrator |
2026-03-07 01:09:32.214434 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] *************************
2026-03-07 01:09:32.214445 | orchestrator | Saturday 07 March 2026 01:09:09 +0000 (0:00:04.191) 0:00:24.039 ********
2026-03-07 01:09:32.214456 | orchestrator | ok: [testbed-manager] => (item=admin)
2026-03-07 01:09:32.214473 | orchestrator | changed: [testbed-manager] => (item=ResellerAdmin)
2026-03-07 01:09:32.214485 | orchestrator |
2026-03-07 01:09:32.214496 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ********************
2026-03-07 01:09:32.214507 | orchestrator | Saturday 07 March 2026 01:09:16 +0000 (0:00:07.161) 0:00:31.201 ********
2026-03-07 01:09:32.214518 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service -> admin)
2026-03-07 01:09:32.214530 | orchestrator |
2026-03-07 01:09:32.214540 | orchestrator | PLAY RECAP *********************************************************************
2026-03-07 01:09:32.214552 | orchestrator | testbed-manager : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-07 01:09:32.214564 | orchestrator | testbed-node-0 : ok=3
changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-07 01:09:32.214576 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-07 01:09:32.214589 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-07 01:09:32.214601 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-07 01:09:32.214631 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-07 01:09:32.214643 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-07 01:09:32.214654 | orchestrator | 2026-03-07 01:09:32.214665 | orchestrator | 2026-03-07 01:09:32.214676 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-07 01:09:32.214688 | orchestrator | Saturday 07 March 2026 01:09:22 +0000 (0:00:05.321) 0:00:36.522 ******** 2026-03-07 01:09:32.214700 | orchestrator | =============================================================================== 2026-03-07 01:09:32.214712 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 8.16s 2026-03-07 01:09:32.214731 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 7.16s 2026-03-07 01:09:32.214740 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 5.32s 2026-03-07 01:09:32.214748 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 4.19s 2026-03-07 01:09:32.214755 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 3.76s 2026-03-07 01:09:32.214763 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.72s 2026-03-07 01:09:32.214773 | orchestrator | ceph-rgw : include_tasks 
------------------------------------------------ 1.90s 2026-03-07 01:09:32.214795 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.08s 2026-03-07 01:09:32.214805 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.86s 2026-03-07 01:09:32.214820 | orchestrator | 2026-03-07 01:09:32.214838 | orchestrator | 2026-03-07 01:09:32.214848 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-07 01:09:32.214858 | orchestrator | 2026-03-07 01:09:32.214868 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-07 01:09:32.214880 | orchestrator | Saturday 07 March 2026 01:03:11 +0000 (0:00:00.800) 0:00:00.800 ******** 2026-03-07 01:09:32.214890 | orchestrator | ok: [testbed-node-0] 2026-03-07 01:09:32.214900 | orchestrator | ok: [testbed-node-1] 2026-03-07 01:09:32.214910 | orchestrator | ok: [testbed-node-2] 2026-03-07 01:09:32.214920 | orchestrator | ok: [testbed-node-3] 2026-03-07 01:09:32.214930 | orchestrator | ok: [testbed-node-4] 2026-03-07 01:09:32.214940 | orchestrator | ok: [testbed-node-5] 2026-03-07 01:09:32.214950 | orchestrator | 2026-03-07 01:09:32.214961 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-07 01:09:32.214972 | orchestrator | Saturday 07 March 2026 01:03:13 +0000 (0:00:02.196) 0:00:02.996 ******** 2026-03-07 01:09:32.214983 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2026-03-07 01:09:32.214993 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2026-03-07 01:09:32.215003 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2026-03-07 01:09:32.215013 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2026-03-07 01:09:32.215024 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2026-03-07 01:09:32.215034 | orchestrator | 
ok: [testbed-node-5] => (item=enable_neutron_True) 2026-03-07 01:09:32.215045 | orchestrator | 2026-03-07 01:09:32.215057 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2026-03-07 01:09:32.215069 | orchestrator | 2026-03-07 01:09:32.215079 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-03-07 01:09:32.215090 | orchestrator | Saturday 07 March 2026 01:03:14 +0000 (0:00:01.293) 0:00:04.290 ******** 2026-03-07 01:09:32.215102 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-07 01:09:32.215112 | orchestrator | 2026-03-07 01:09:32.215123 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2026-03-07 01:09:32.215154 | orchestrator | Saturday 07 March 2026 01:03:16 +0000 (0:00:01.535) 0:00:05.825 ******** 2026-03-07 01:09:32.215167 | orchestrator | ok: [testbed-node-0] 2026-03-07 01:09:32.215178 | orchestrator | ok: [testbed-node-1] 2026-03-07 01:09:32.215188 | orchestrator | ok: [testbed-node-3] 2026-03-07 01:09:32.215201 | orchestrator | ok: [testbed-node-2] 2026-03-07 01:09:32.215209 | orchestrator | ok: [testbed-node-4] 2026-03-07 01:09:32.215220 | orchestrator | ok: [testbed-node-5] 2026-03-07 01:09:32.215230 | orchestrator | 2026-03-07 01:09:32.215247 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2026-03-07 01:09:32.215260 | orchestrator | Saturday 07 March 2026 01:03:18 +0000 (0:00:01.948) 0:00:07.774 ******** 2026-03-07 01:09:32.215271 | orchestrator | ok: [testbed-node-1] 2026-03-07 01:09:32.215282 | orchestrator | ok: [testbed-node-0] 2026-03-07 01:09:32.215292 | orchestrator | ok: [testbed-node-2] 2026-03-07 01:09:32.215302 | orchestrator | ok: [testbed-node-3] 2026-03-07 01:09:32.215312 | orchestrator | ok: [testbed-node-4] 2026-03-07 
01:09:32.215323 | orchestrator | ok: [testbed-node-5] 2026-03-07 01:09:32.215333 | orchestrator | 2026-03-07 01:09:32.215345 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2026-03-07 01:09:32.215356 | orchestrator | Saturday 07 March 2026 01:03:20 +0000 (0:00:02.164) 0:00:09.938 ******** 2026-03-07 01:09:32.215368 | orchestrator | ok: [testbed-node-0] => { 2026-03-07 01:09:32.215380 | orchestrator |  "changed": false, 2026-03-07 01:09:32.215391 | orchestrator |  "msg": "All assertions passed" 2026-03-07 01:09:32.215412 | orchestrator | } 2026-03-07 01:09:32.215420 | orchestrator | ok: [testbed-node-1] => { 2026-03-07 01:09:32.215426 | orchestrator |  "changed": false, 2026-03-07 01:09:32.215433 | orchestrator |  "msg": "All assertions passed" 2026-03-07 01:09:32.215440 | orchestrator | } 2026-03-07 01:09:32.215448 | orchestrator | ok: [testbed-node-2] => { 2026-03-07 01:09:32.215459 | orchestrator |  "changed": false, 2026-03-07 01:09:32.215478 | orchestrator |  "msg": "All assertions passed" 2026-03-07 01:09:32.215488 | orchestrator | } 2026-03-07 01:09:32.215498 | orchestrator | ok: [testbed-node-3] => { 2026-03-07 01:09:32.215511 | orchestrator |  "changed": false, 2026-03-07 01:09:32.215523 | orchestrator |  "msg": "All assertions passed" 2026-03-07 01:09:32.215533 | orchestrator | } 2026-03-07 01:09:32.215545 | orchestrator | ok: [testbed-node-4] => { 2026-03-07 01:09:32.215552 | orchestrator |  "changed": false, 2026-03-07 01:09:32.215558 | orchestrator |  "msg": "All assertions passed" 2026-03-07 01:09:32.215565 | orchestrator | } 2026-03-07 01:09:32.215572 | orchestrator | ok: [testbed-node-5] => { 2026-03-07 01:09:32.215587 | orchestrator |  "changed": false, 2026-03-07 01:09:32.215594 | orchestrator |  "msg": "All assertions passed" 2026-03-07 01:09:32.215600 | orchestrator | } 2026-03-07 01:09:32.215607 | orchestrator | 2026-03-07 01:09:32.215614 | orchestrator | TASK [neutron : Check for ML2/OVS 
presence] ************************************ 2026-03-07 01:09:32.215621 | orchestrator | Saturday 07 March 2026 01:03:21 +0000 (0:00:01.162) 0:00:11.101 ******** 2026-03-07 01:09:32.215628 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:09:32.215634 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:09:32.215641 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:09:32.215647 | orchestrator | skipping: [testbed-node-3] 2026-03-07 01:09:32.215654 | orchestrator | skipping: [testbed-node-4] 2026-03-07 01:09:32.215661 | orchestrator | skipping: [testbed-node-5] 2026-03-07 01:09:32.215667 | orchestrator | 2026-03-07 01:09:32.215674 | orchestrator | TASK [service-ks-register : neutron | Creating services] *********************** 2026-03-07 01:09:32.215688 | orchestrator | Saturday 07 March 2026 01:03:22 +0000 (0:00:00.764) 0:00:11.866 ******** 2026-03-07 01:09:32.215695 | orchestrator | changed: [testbed-node-0] => (item=neutron (network)) 2026-03-07 01:09:32.215701 | orchestrator | 2026-03-07 01:09:32.215708 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] ********************** 2026-03-07 01:09:32.215715 | orchestrator | Saturday 07 March 2026 01:03:26 +0000 (0:00:03.897) 0:00:15.763 ******** 2026-03-07 01:09:32.215722 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2026-03-07 01:09:32.215729 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2026-03-07 01:09:32.215809 | orchestrator | 2026-03-07 01:09:32.215819 | orchestrator | TASK [service-ks-register : neutron | Creating projects] *********************** 2026-03-07 01:09:32.215826 | orchestrator | Saturday 07 March 2026 01:03:32 +0000 (0:00:06.237) 0:00:22.001 ******** 2026-03-07 01:09:32.215833 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-07 01:09:32.215840 | orchestrator | 2026-03-07 01:09:32.215856 | orchestrator | TASK 
[service-ks-register : neutron | Creating users] ************************** 2026-03-07 01:09:32.215867 | orchestrator | Saturday 07 March 2026 01:03:35 +0000 (0:00:02.934) 0:00:24.936 ******** 2026-03-07 01:09:32.215878 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service) 2026-03-07 01:09:32.215889 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-07 01:09:32.215900 | orchestrator | 2026-03-07 01:09:32.215910 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 2026-03-07 01:09:32.215920 | orchestrator | Saturday 07 March 2026 01:03:39 +0000 (0:00:03.941) 0:00:28.877 ******** 2026-03-07 01:09:32.215930 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-07 01:09:32.215940 | orchestrator | 2026-03-07 01:09:32.215950 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] ********************* 2026-03-07 01:09:32.215960 | orchestrator | Saturday 07 March 2026 01:03:44 +0000 (0:00:04.430) 0:00:33.308 ******** 2026-03-07 01:09:32.215981 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin) 2026-03-07 01:09:32.215993 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service) 2026-03-07 01:09:32.216004 | orchestrator | 2026-03-07 01:09:32.216014 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-03-07 01:09:32.216024 | orchestrator | Saturday 07 March 2026 01:03:53 +0000 (0:00:09.744) 0:00:43.052 ******** 2026-03-07 01:09:32.216035 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:09:32.216045 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:09:32.216056 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:09:32.216067 | orchestrator | skipping: [testbed-node-3] 2026-03-07 01:09:32.216077 | orchestrator | skipping: [testbed-node-4] 2026-03-07 01:09:32.216089 | orchestrator | skipping: [testbed-node-5] 2026-03-07 
01:09:32.216098 | orchestrator | 2026-03-07 01:09:32.216108 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2026-03-07 01:09:32.216118 | orchestrator | Saturday 07 March 2026 01:03:56 +0000 (0:00:03.086) 0:00:46.139 ******** 2026-03-07 01:09:32.216130 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:09:32.216201 | orchestrator | skipping: [testbed-node-3] 2026-03-07 01:09:32.216214 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:09:32.216225 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:09:32.216236 | orchestrator | skipping: [testbed-node-4] 2026-03-07 01:09:32.216247 | orchestrator | skipping: [testbed-node-5] 2026-03-07 01:09:32.216258 | orchestrator | 2026-03-07 01:09:32.216269 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2026-03-07 01:09:32.216277 | orchestrator | Saturday 07 March 2026 01:04:01 +0000 (0:00:04.977) 0:00:51.117 ******** 2026-03-07 01:09:32.216285 | orchestrator | ok: [testbed-node-0] 2026-03-07 01:09:32.216296 | orchestrator | ok: [testbed-node-1] 2026-03-07 01:09:32.216313 | orchestrator | ok: [testbed-node-2] 2026-03-07 01:09:32.216324 | orchestrator | ok: [testbed-node-3] 2026-03-07 01:09:32.216335 | orchestrator | ok: [testbed-node-4] 2026-03-07 01:09:32.216346 | orchestrator | ok: [testbed-node-5] 2026-03-07 01:09:32.216356 | orchestrator | 2026-03-07 01:09:32.216367 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-03-07 01:09:32.216377 | orchestrator | Saturday 07 March 2026 01:04:03 +0000 (0:00:01.516) 0:00:52.633 ******** 2026-03-07 01:09:32.216387 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:09:32.216397 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:09:32.216407 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:09:32.216417 | orchestrator | skipping: [testbed-node-4] 2026-03-07 01:09:32.216428 | orchestrator | 
skipping: [testbed-node-5] 2026-03-07 01:09:32.216439 | orchestrator | skipping: [testbed-node-3] 2026-03-07 01:09:32.216450 | orchestrator | 2026-03-07 01:09:32.216461 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2026-03-07 01:09:32.216473 | orchestrator | Saturday 07 March 2026 01:04:06 +0000 (0:00:03.591) 0:00:56.224 ******** 2026-03-07 01:09:32.216524 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-07 01:09:32.216540 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-07 01:09:32.216562 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-07 01:09:32.216575 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-07 01:09:32.216588 | orchestrator | changed: 
[testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-07 01:09:32.216611 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-07 01:09:32.216632 | orchestrator | 2026-03-07 01:09:32.216648 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2026-03-07 01:09:32.216660 | orchestrator | Saturday 07 March 2026 01:04:13 +0000 (0:00:06.578) 0:01:02.803 ******** 2026-03-07 01:09:32.216668 | orchestrator | [WARNING]: Skipped 2026-03-07 01:09:32.216675 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2026-03-07 01:09:32.216683 | 
orchestrator | due to this access issue: 2026-03-07 01:09:32.216689 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2026-03-07 01:09:32.216696 | orchestrator | a directory 2026-03-07 01:09:32.216703 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-07 01:09:32.216709 | orchestrator | 2026-03-07 01:09:32.216716 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-03-07 01:09:32.216725 | orchestrator | Saturday 07 March 2026 01:04:15 +0000 (0:00:01.595) 0:01:04.398 ******** 2026-03-07 01:09:32.216740 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-07 01:09:32.216757 | orchestrator | 2026-03-07 01:09:32.216768 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2026-03-07 01:09:32.216778 | orchestrator | Saturday 07 March 2026 01:04:16 +0000 (0:00:01.695) 0:01:06.094 ******** 2026-03-07 01:09:32.216789 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-07 01:09:32.216801 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 
'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-07 01:09:32.216813 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-07 01:09:32.216842 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-07 01:09:32.216863 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-07 01:09:32.216876 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-07 01:09:32.216888 | orchestrator | 2026-03-07 01:09:32.216899 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2026-03-07 01:09:32.216910 | orchestrator | Saturday 07 March 2026 01:04:22 +0000 (0:00:05.243) 0:01:11.337 ******** 2026-03-07 01:09:32.216918 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-07 01:09:32.216930 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-07 01:09:32.216944 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:09:32.216951 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:09:32.216967 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-07 01:09:32.216974 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:09:32.216981 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-07 01:09:32.216989 | orchestrator | skipping: [testbed-node-3] 2026-03-07 01:09:32.216996 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-07 01:09:32.217002 | orchestrator | skipping: [testbed-node-5] 2026-03-07 01:09:32.217009 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  
2026-03-07 01:09:32.217016 | orchestrator | skipping: [testbed-node-4] 2026-03-07 01:09:32.217023 | orchestrator | 2026-03-07 01:09:32.217036 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2026-03-07 01:09:32.217043 | orchestrator | Saturday 07 March 2026 01:04:30 +0000 (0:00:08.492) 0:01:19.830 ******** 2026-03-07 01:09:32.217059 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-07 01:09:32.217066 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:09:32.217077 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 
'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-07 01:09:32.217087 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:09:32.217099 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-07 01:09:32.217109 | orchestrator | skipping: [testbed-node-3] 2026-03-07 01:09:32.217116 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-07 01:09:32.217123 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:09:32.217130 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-07 01:09:32.217195 | orchestrator | skipping: [testbed-node-4] 2026-03-07 01:09:32.217212 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-07 01:09:32.217220 | orchestrator | skipping: [testbed-node-5] 2026-03-07 01:09:32.217226 | orchestrator | 2026-03-07 01:09:32.217233 | orchestrator | TASK 
[neutron : Creating TLS backend PEM File] ********************************* 2026-03-07 01:09:32.217240 | orchestrator | Saturday 07 March 2026 01:04:38 +0000 (0:00:07.961) 0:01:27.791 ******** 2026-03-07 01:09:32.217247 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:09:32.217254 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:09:32.217260 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:09:32.217267 | orchestrator | skipping: [testbed-node-5] 2026-03-07 01:09:32.217274 | orchestrator | skipping: [testbed-node-4] 2026-03-07 01:09:32.217280 | orchestrator | skipping: [testbed-node-3] 2026-03-07 01:09:32.217287 | orchestrator | 2026-03-07 01:09:32.217293 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2026-03-07 01:09:32.217300 | orchestrator | Saturday 07 March 2026 01:04:43 +0000 (0:00:04.558) 0:01:32.349 ******** 2026-03-07 01:09:32.217307 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:09:32.217313 | orchestrator | 2026-03-07 01:09:32.217320 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2026-03-07 01:09:32.217327 | orchestrator | Saturday 07 March 2026 01:04:43 +0000 (0:00:00.157) 0:01:32.507 ******** 2026-03-07 01:09:32.217333 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:09:32.217340 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:09:32.217347 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:09:32.217353 | orchestrator | skipping: [testbed-node-3] 2026-03-07 01:09:32.217360 | orchestrator | skipping: [testbed-node-4] 2026-03-07 01:09:32.217366 | orchestrator | skipping: [testbed-node-5] 2026-03-07 01:09:32.217373 | orchestrator | 2026-03-07 01:09:32.217380 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2026-03-07 01:09:32.217387 | orchestrator | Saturday 07 March 2026 01:04:44 +0000 (0:00:01.586) 0:01:34.094 ******** 2026-03-07 
01:09:32.217394 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-07 01:09:32.217409 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:09:32.217416 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-07 01:09:32.217424 | orchestrator | skipping: [testbed-node-1] 2026-03-07 
01:09:32.217435 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-07 01:09:32.217442 | orchestrator | skipping: [testbed-node-5] 2026-03-07 01:09:32.217453 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-07 01:09:32.217460 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:09:32.217467 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 
'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-07 01:09:32.217474 | orchestrator | skipping: [testbed-node-3] 2026-03-07 01:09:32.217484 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-07 01:09:32.217503 | orchestrator | skipping: [testbed-node-4] 2026-03-07 01:09:32.217514 | orchestrator | 2026-03-07 01:09:32.217526 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2026-03-07 01:09:32.217537 | orchestrator | Saturday 07 March 2026 01:04:48 +0000 (0:00:04.198) 0:01:38.292 ******** 2026-03-07 01:09:32.217548 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-07 01:09:32.217569 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-07 01:09:32.217577 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-07 01:09:32.217584 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-07 01:09:32.217597 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-07 01:09:32.217604 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-07 01:09:32.217611 | orchestrator | 2026-03-07 01:09:32.217618 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2026-03-07 01:09:32.217624 | orchestrator | Saturday 07 March 2026 01:04:54 +0000 (0:00:05.771) 0:01:44.064 ******** 2026-03-07 01:09:32.217640 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 
'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-07 01:09:32.217648 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-07 01:09:32.217660 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9696', 'listen_port': '9696'}}}}) 2026-03-07 01:09:32.217667 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-07 01:09:32.217674 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-07 01:09:32.217689 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-07 01:09:32.217697 | orchestrator | 2026-03-07 01:09:32.217704 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2026-03-07 01:09:32.217710 | orchestrator | Saturday 07 March 2026 01:05:04 +0000 (0:00:10.027) 0:01:54.091 ******** 2026-03-07 01:09:32.217717 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-07 01:09:32.217729 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:09:32.217736 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-07 01:09:32.217743 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:09:32.217750 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-07 01:09:32.217757 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:09:32.217768 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-07 01:09:32.217775 | orchestrator | skipping: [testbed-node-3] 2026-03-07 01:09:32.217809 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-07 01:09:32.217816 | orchestrator | skipping: [testbed-node-5] 2026-03-07 01:09:32.217823 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-07 01:09:32.217838 | orchestrator | skipping: [testbed-node-4] 2026-03-07 01:09:32.217845 | orchestrator | 2026-03-07 01:09:32.217851 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2026-03-07 01:09:32.217858 | orchestrator | Saturday 07 March 2026 01:05:11 +0000 (0:00:06.373) 0:02:00.465 ******** 2026-03-07 01:09:32.217865 | orchestrator | skipping: [testbed-node-3] 2026-03-07 01:09:32.217871 | orchestrator | skipping: [testbed-node-4] 2026-03-07 01:09:32.217878 | orchestrator | skipping: [testbed-node-5] 2026-03-07 01:09:32.217885 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:09:32.217891 | orchestrator | changed: [testbed-node-1] 2026-03-07 01:09:32.217898 | orchestrator | changed: [testbed-node-2] 2026-03-07 01:09:32.217904 | orchestrator | 2026-03-07 01:09:32.217911 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2026-03-07 01:09:32.217918 | orchestrator | Saturday 07 March 2026 01:05:14 +0000 (0:00:02.962) 0:02:03.427 ******** 2026-03-07 01:09:32.217925 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-07 01:09:32.217932 | orchestrator | skipping: [testbed-node-5] 2026-03-07 01:09:32.217939 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-07 01:09:32.217946 | orchestrator | skipping: [testbed-node-3] 2026-03-07 01:09:32.217960 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-07 01:09:32.217973 | orchestrator | skipping: [testbed-node-4] 2026-03-07 01:09:32.217980 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 
'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-07 01:09:32.217988 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-07 01:09:32.217995 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-07 01:09:32.218001 | orchestrator | 2026-03-07 01:09:32.218008 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2026-03-07 01:09:32.218064 | orchestrator | Saturday 07 March 2026 01:05:22 +0000 (0:00:08.130) 0:02:11.558 ******** 2026-03-07 01:09:32.218073 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:09:32.218081 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:09:32.218088 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:09:32.218095 | orchestrator | skipping: [testbed-node-5] 2026-03-07 01:09:32.218101 | orchestrator | skipping: [testbed-node-3] 2026-03-07 01:09:32.218108 | orchestrator | skipping: [testbed-node-4] 2026-03-07 01:09:32.218115 | orchestrator | 2026-03-07 01:09:32.218121 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2026-03-07 01:09:32.218128 | orchestrator | Saturday 07 March 2026 01:05:26 +0000 (0:00:04.234) 0:02:15.793 ******** 2026-03-07 01:09:32.218155 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:09:32.218167 | orchestrator | skipping: [testbed-node-3] 2026-03-07 01:09:32.218177 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:09:32.218184 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:09:32.218191 | orchestrator | skipping: [testbed-node-4] 2026-03-07 
01:09:32.218218 | orchestrator | skipping: [testbed-node-5] 2026-03-07 01:09:32.218225 | orchestrator | 2026-03-07 01:09:32.218232 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2026-03-07 01:09:32.218239 | orchestrator | Saturday 07 March 2026 01:05:30 +0000 (0:00:03.927) 0:02:19.720 ******** 2026-03-07 01:09:32.218245 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:09:32.218252 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:09:32.218258 | orchestrator | skipping: [testbed-node-4] 2026-03-07 01:09:32.218265 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:09:32.218272 | orchestrator | skipping: [testbed-node-5] 2026-03-07 01:09:32.218278 | orchestrator | skipping: [testbed-node-3] 2026-03-07 01:09:32.218285 | orchestrator | 2026-03-07 01:09:32.218291 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2026-03-07 01:09:32.218302 | orchestrator | Saturday 07 March 2026 01:05:35 +0000 (0:00:04.831) 0:02:24.552 ******** 2026-03-07 01:09:32.218309 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:09:32.218316 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:09:32.218322 | orchestrator | skipping: [testbed-node-4] 2026-03-07 01:09:32.218328 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:09:32.218335 | orchestrator | skipping: [testbed-node-5] 2026-03-07 01:09:32.218341 | orchestrator | skipping: [testbed-node-3] 2026-03-07 01:09:32.218348 | orchestrator | 2026-03-07 01:09:32.218354 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2026-03-07 01:09:32.218361 | orchestrator | Saturday 07 March 2026 01:05:39 +0000 (0:00:04.353) 0:02:28.905 ******** 2026-03-07 01:09:32.218368 | orchestrator | skipping: [testbed-node-5] 2026-03-07 01:09:32.218374 | orchestrator | skipping: [testbed-node-4] 2026-03-07 01:09:32.218381 | orchestrator | skipping: [testbed-node-0] 2026-03-07 
01:09:32.218387 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:09:32.218394 | orchestrator | skipping: [testbed-node-3] 2026-03-07 01:09:32.218400 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:09:32.218407 | orchestrator | 2026-03-07 01:09:32.218414 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2026-03-07 01:09:32.218420 | orchestrator | Saturday 07 March 2026 01:05:42 +0000 (0:00:02.561) 0:02:31.466 ******** 2026-03-07 01:09:32.218427 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:09:32.218433 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:09:32.218440 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:09:32.218446 | orchestrator | skipping: [testbed-node-3] 2026-03-07 01:09:32.218453 | orchestrator | skipping: [testbed-node-5] 2026-03-07 01:09:32.218459 | orchestrator | skipping: [testbed-node-4] 2026-03-07 01:09:32.218466 | orchestrator | 2026-03-07 01:09:32.218472 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2026-03-07 01:09:32.218479 | orchestrator | Saturday 07 March 2026 01:05:45 +0000 (0:00:03.126) 0:02:34.593 ******** 2026-03-07 01:09:32.218485 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-07 01:09:32.218492 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:09:32.218499 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-07 01:09:32.218506 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:09:32.218512 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-07 01:09:32.218519 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:09:32.218525 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-07 01:09:32.218532 | orchestrator | skipping: 
[testbed-node-4] 2026-03-07 01:09:32.218538 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-07 01:09:32.218545 | orchestrator | skipping: [testbed-node-3] 2026-03-07 01:09:32.218552 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-07 01:09:32.218558 | orchestrator | skipping: [testbed-node-5] 2026-03-07 01:09:32.218571 | orchestrator | 2026-03-07 01:09:32.218578 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2026-03-07 01:09:32.218585 | orchestrator | Saturday 07 March 2026 01:05:51 +0000 (0:00:06.515) 0:02:41.108 ******** 2026-03-07 01:09:32.218592 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-07 01:09:32.218599 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:09:32.218612 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-07 01:09:32.218620 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:09:32.218630 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-07 01:09:32.218637 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:09:32.218644 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-07 01:09:32.218651 | orchestrator | skipping: [testbed-node-3] 2026-03-07 01:09:32.218663 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-07 01:09:32.218670 | orchestrator | skipping: [testbed-node-4] 2026-03-07 01:09:32.218677 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-07 01:09:32.218684 | orchestrator | skipping: [testbed-node-5] 2026-03-07 01:09:32.218690 | orchestrator | 2026-03-07 01:09:32.218697 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2026-03-07 01:09:32.218704 | orchestrator | Saturday 07 March 2026 01:05:56 +0000 (0:00:04.379) 0:02:45.488 ******** 2026-03-07 01:09:32.218718 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-07 01:09:32.218725 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:09:32.218732 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-07 01:09:32.218739 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:09:32.218746 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-07 01:09:32.218757 | orchestrator | skipping: [testbed-node-3] 2026-03-07 01:09:32.218764 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-07 01:09:32.218771 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:09:32.218783 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-07 01:09:32.218790 | orchestrator | skipping: [testbed-node-5] 2026-03-07 01:09:32.218800 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-07 01:09:32.218807 | orchestrator | skipping: [testbed-node-4] 2026-03-07 01:09:32.218814 | orchestrator | 2026-03-07 01:09:32.218820 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2026-03-07 01:09:32.218830 | orchestrator | Saturday 07 March 2026 01:06:01 +0000 (0:00:05.590) 0:02:51.078 ******** 2026-03-07 01:09:32.218843 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:09:32.218850 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:09:32.218856 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:09:32.218863 | orchestrator | skipping: [testbed-node-3] 2026-03-07 01:09:32.218870 | orchestrator | skipping: [testbed-node-5] 2026-03-07 01:09:32.218877 | orchestrator | skipping: [testbed-node-4] 2026-03-07 01:09:32.218883 | orchestrator | 2026-03-07 01:09:32.218890 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2026-03-07 01:09:32.218901 | orchestrator | Saturday 07 March 2026 01:06:05 +0000 (0:00:04.098) 0:02:55.177 ******** 2026-03-07 01:09:32.218908 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:09:32.218915 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:09:32.218921 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:09:32.218928 | orchestrator | changed: [testbed-node-3] 2026-03-07 01:09:32.218934 | orchestrator | changed: [testbed-node-5] 2026-03-07 01:09:32.218941 | orchestrator | changed: [testbed-node-4] 2026-03-07 01:09:32.218948 | orchestrator | 2026-03-07 01:09:32.218954 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2026-03-07 01:09:32.218961 | orchestrator | Saturday 07 March 2026 01:06:11 +0000 (0:00:06.037) 0:03:01.214 ******** 2026-03-07 01:09:32.218968 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:09:32.218974 | orchestrator | skipping: 
[testbed-node-4] 2026-03-07 01:09:32.218981 | orchestrator | skipping: [testbed-node-3] 2026-03-07 01:09:32.218988 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:09:32.218994 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:09:32.219001 | orchestrator | skipping: [testbed-node-5] 2026-03-07 01:09:32.219008 | orchestrator | 2026-03-07 01:09:32.219014 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2026-03-07 01:09:32.219021 | orchestrator | Saturday 07 March 2026 01:06:18 +0000 (0:00:07.084) 0:03:08.299 ******** 2026-03-07 01:09:32.219028 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:09:32.219034 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:09:32.219041 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:09:32.219047 | orchestrator | skipping: [testbed-node-3] 2026-03-07 01:09:32.219054 | orchestrator | skipping: [testbed-node-4] 2026-03-07 01:09:32.219061 | orchestrator | skipping: [testbed-node-5] 2026-03-07 01:09:32.219067 | orchestrator | 2026-03-07 01:09:32.219074 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] ********************************** 2026-03-07 01:09:32.219081 | orchestrator | Saturday 07 March 2026 01:06:25 +0000 (0:00:06.979) 0:03:15.279 ******** 2026-03-07 01:09:32.219087 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:09:32.219094 | orchestrator | skipping: [testbed-node-4] 2026-03-07 01:09:32.219100 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:09:32.219107 | orchestrator | skipping: [testbed-node-3] 2026-03-07 01:09:32.219114 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:09:32.219120 | orchestrator | skipping: [testbed-node-5] 2026-03-07 01:09:32.219127 | orchestrator | 2026-03-07 01:09:32.219149 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************ 2026-03-07 01:09:32.219157 | orchestrator | Saturday 07 March 2026 01:06:32 +0000 (0:00:06.508) 
0:03:21.787 ******** 2026-03-07 01:09:32.219163 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:09:32.219170 | orchestrator | skipping: [testbed-node-3] 2026-03-07 01:09:32.219177 | orchestrator | skipping: [testbed-node-5] 2026-03-07 01:09:32.219183 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:09:32.219190 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:09:32.219196 | orchestrator | skipping: [testbed-node-4] 2026-03-07 01:09:32.219203 | orchestrator | 2026-03-07 01:09:32.219210 | orchestrator | TASK [neutron : Copying over nsx.ini] ****************************************** 2026-03-07 01:09:32.219216 | orchestrator | Saturday 07 March 2026 01:06:36 +0000 (0:00:03.760) 0:03:25.548 ******** 2026-03-07 01:09:32.219223 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:09:32.219230 | orchestrator | skipping: [testbed-node-3] 2026-03-07 01:09:32.219236 | orchestrator | skipping: [testbed-node-4] 2026-03-07 01:09:32.219243 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:09:32.219250 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:09:32.219256 | orchestrator | skipping: [testbed-node-5] 2026-03-07 01:09:32.219263 | orchestrator | 2026-03-07 01:09:32.219269 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] ************************** 2026-03-07 01:09:32.219276 | orchestrator | Saturday 07 March 2026 01:06:40 +0000 (0:00:04.667) 0:03:30.215 ******** 2026-03-07 01:09:32.219283 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:09:32.219295 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:09:32.219302 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:09:32.219308 | orchestrator | skipping: [testbed-node-3] 2026-03-07 01:09:32.219315 | orchestrator | skipping: [testbed-node-5] 2026-03-07 01:09:32.219322 | orchestrator | skipping: [testbed-node-4] 2026-03-07 01:09:32.219333 | orchestrator | 2026-03-07 01:09:32.219349 | orchestrator | TASK [neutron : Copying over extra ml2 
plugins] ******************************** 2026-03-07 01:09:32.219360 | orchestrator | Saturday 07 March 2026 01:06:44 +0000 (0:00:03.186) 0:03:33.402 ******** 2026-03-07 01:09:32.219370 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:09:32.219381 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:09:32.219392 | orchestrator | skipping: [testbed-node-3] 2026-03-07 01:09:32.219402 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:09:32.219413 | orchestrator | skipping: [testbed-node-5] 2026-03-07 01:09:32.219425 | orchestrator | skipping: [testbed-node-4] 2026-03-07 01:09:32.219435 | orchestrator | 2026-03-07 01:09:32.219445 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] **************************** 2026-03-07 01:09:32.219456 | orchestrator | Saturday 07 March 2026 01:06:47 +0000 (0:00:03.292) 0:03:36.695 ******** 2026-03-07 01:09:32.219474 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-07 01:09:32.219486 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:09:32.219497 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-07 01:09:32.219509 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:09:32.219520 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-07 01:09:32.219531 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:09:32.219543 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-07 01:09:32.219555 | orchestrator | skipping: [testbed-node-3] 2026-03-07 01:09:32.219566 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-07 01:09:32.219578 | orchestrator | skipping: [testbed-node-5] 2026-03-07 01:09:32.219589 | orchestrator | skipping: [testbed-node-4] => 
(item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-07 01:09:32.219601 | orchestrator | skipping: [testbed-node-4] 2026-03-07 01:09:32.219611 | orchestrator | 2026-03-07 01:09:32.219618 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2026-03-07 01:09:32.219624 | orchestrator | Saturday 07 March 2026 01:06:52 +0000 (0:00:05.040) 0:03:41.735 ******** 2026-03-07 01:09:32.219632 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-07 01:09:32.219639 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:09:32.219646 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-07 01:09:32.219662 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:09:32.219684 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-07 01:09:32.219691 | orchestrator | skipping: [testbed-node-3] 2026-03-07 01:09:32.219705 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-07 01:09:32.219712 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:09:32.219719 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-07 01:09:32.219726 | orchestrator | skipping: [testbed-node-4] 2026-03-07 01:09:32.219733 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-07 01:09:32.219740 | orchestrator | skipping: [testbed-node-5] 2026-03-07 
01:09:32.219752 | orchestrator | 2026-03-07 01:09:32.219759 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2026-03-07 01:09:32.219766 | orchestrator | Saturday 07 March 2026 01:06:55 +0000 (0:00:02.709) 0:03:44.444 ******** 2026-03-07 01:09:32.219772 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-07 01:09:32.219784 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-07 
01:09:32.219795 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-07 01:09:32.219802 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-07 01:09:32.219809 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 
'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-07 01:09:32.219821 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-07 01:09:32.219828 | orchestrator | 2026-03-07 01:09:32.219835 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-03-07 01:09:32.219842 | orchestrator | Saturday 07 March 2026 01:06:59 +0000 (0:00:04.282) 0:03:48.726 ******** 2026-03-07 01:09:32.219848 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:09:32.219855 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:09:32.219861 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:09:32.219868 | orchestrator | skipping: [testbed-node-3] 2026-03-07 
01:09:32.219875 | orchestrator | skipping: [testbed-node-4] 2026-03-07 01:09:32.219881 | orchestrator | skipping: [testbed-node-5] 2026-03-07 01:09:32.219888 | orchestrator | 2026-03-07 01:09:32.219894 | orchestrator | TASK [neutron : Creating Neutron database] ************************************* 2026-03-07 01:09:32.219901 | orchestrator | Saturday 07 March 2026 01:07:00 +0000 (0:00:00.806) 0:03:49.533 ******** 2026-03-07 01:09:32.219912 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:09:32.219919 | orchestrator | 2026-03-07 01:09:32.219925 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ******** 2026-03-07 01:09:32.219932 | orchestrator | Saturday 07 March 2026 01:07:02 +0000 (0:00:02.298) 0:03:51.831 ******** 2026-03-07 01:09:32.219939 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:09:32.219945 | orchestrator | 2026-03-07 01:09:32.219952 | orchestrator | TASK [neutron : Running Neutron bootstrap container] *************************** 2026-03-07 01:09:32.219959 | orchestrator | Saturday 07 March 2026 01:07:05 +0000 (0:00:02.742) 0:03:54.574 ******** 2026-03-07 01:09:32.219965 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:09:32.219972 | orchestrator | 2026-03-07 01:09:32.219978 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-07 01:09:32.219988 | orchestrator | Saturday 07 March 2026 01:07:53 +0000 (0:00:47.946) 0:04:42.520 ******** 2026-03-07 01:09:32.219996 | orchestrator | 2026-03-07 01:09:32.220002 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-07 01:09:32.220009 | orchestrator | Saturday 07 March 2026 01:07:53 +0000 (0:00:00.070) 0:04:42.590 ******** 2026-03-07 01:09:32.220015 | orchestrator | 2026-03-07 01:09:32.220022 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-07 01:09:32.220029 | orchestrator | Saturday 07 
March 2026 01:07:53 +0000 (0:00:00.331) 0:04:42.922 ******** 2026-03-07 01:09:32.220035 | orchestrator | 2026-03-07 01:09:32.220042 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-07 01:09:32.220049 | orchestrator | Saturday 07 March 2026 01:07:53 +0000 (0:00:00.077) 0:04:43.000 ******** 2026-03-07 01:09:32.220055 | orchestrator | 2026-03-07 01:09:32.220062 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-07 01:09:32.220068 | orchestrator | Saturday 07 March 2026 01:07:53 +0000 (0:00:00.073) 0:04:43.074 ******** 2026-03-07 01:09:32.220079 | orchestrator | 2026-03-07 01:09:32.220086 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-07 01:09:32.220093 | orchestrator | Saturday 07 March 2026 01:07:53 +0000 (0:00:00.078) 0:04:43.152 ******** 2026-03-07 01:09:32.220099 | orchestrator | 2026-03-07 01:09:32.220106 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] ******************* 2026-03-07 01:09:32.220113 | orchestrator | Saturday 07 March 2026 01:07:53 +0000 (0:00:00.076) 0:04:43.228 ******** 2026-03-07 01:09:32.220120 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:09:32.220126 | orchestrator | changed: [testbed-node-1] 2026-03-07 01:09:32.220133 | orchestrator | changed: [testbed-node-2] 2026-03-07 01:09:32.220159 | orchestrator | 2026-03-07 01:09:32.220166 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] ******* 2026-03-07 01:09:32.220173 | orchestrator | Saturday 07 March 2026 01:08:26 +0000 (0:00:32.472) 0:05:15.701 ******** 2026-03-07 01:09:32.220179 | orchestrator | changed: [testbed-node-3] 2026-03-07 01:09:32.220186 | orchestrator | changed: [testbed-node-5] 2026-03-07 01:09:32.220193 | orchestrator | changed: [testbed-node-4] 2026-03-07 01:09:32.220199 | orchestrator | 2026-03-07 01:09:32.220206 | orchestrator | 
PLAY RECAP ********************************************************************* 2026-03-07 01:09:32.220213 | orchestrator | testbed-node-0 : ok=26  changed=15  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-07 01:09:32.220221 | orchestrator | testbed-node-1 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2026-03-07 01:09:32.220228 | orchestrator | testbed-node-2 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2026-03-07 01:09:32.220234 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-07 01:09:32.220241 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-07 01:09:32.220247 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-07 01:09:32.220254 | orchestrator | 2026-03-07 01:09:32.220261 | orchestrator | 2026-03-07 01:09:32.220267 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-07 01:09:32.220274 | orchestrator | Saturday 07 March 2026 01:09:30 +0000 (0:01:03.739) 0:06:19.440 ******** 2026-03-07 01:09:32.220281 | orchestrator | =============================================================================== 2026-03-07 01:09:32.220287 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 63.74s 2026-03-07 01:09:32.220294 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 47.95s 2026-03-07 01:09:32.220300 | orchestrator | neutron : Restart neutron-server container ----------------------------- 32.47s 2026-03-07 01:09:32.220307 | orchestrator | neutron : Copying over neutron.conf ------------------------------------ 10.03s 2026-03-07 01:09:32.220313 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 9.74s 2026-03-07 01:09:32.220320 | 
orchestrator | service-cert-copy : neutron | Copying over backend internal TLS certificate --- 8.49s 2026-03-07 01:09:32.220327 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 8.13s 2026-03-07 01:09:32.220333 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 7.96s 2026-03-07 01:09:32.220340 | orchestrator | neutron : Copying over metering_agent.ini ------------------------------- 7.08s 2026-03-07 01:09:32.220347 | orchestrator | neutron : Copying over ironic_neutron_agent.ini ------------------------- 6.98s 2026-03-07 01:09:32.220357 | orchestrator | neutron : Ensuring config directories exist ----------------------------- 6.58s 2026-03-07 01:09:32.220364 | orchestrator | neutron : Copying over dnsmasq.conf ------------------------------------- 6.52s 2026-03-07 01:09:32.220376 | orchestrator | neutron : Copying over bgp_dragent.ini ---------------------------------- 6.51s 2026-03-07 01:09:32.220382 | orchestrator | neutron : Copying over neutron_vpnaas.conf ------------------------------ 6.37s 2026-03-07 01:09:32.220389 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 6.24s 2026-03-07 01:09:32.220395 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 6.04s 2026-03-07 01:09:32.220403 | orchestrator | neutron : Copying over config.json files for services ------------------- 5.77s 2026-03-07 01:09:32.220413 | orchestrator | neutron : Copying over fwaas_driver.ini --------------------------------- 5.59s 2026-03-07 01:09:32.220420 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 5.24s 2026-03-07 01:09:32.220427 | orchestrator | neutron : Copying over neutron-tls-proxy.cfg ---------------------------- 5.04s 2026-03-07 01:09:32.220433 | orchestrator | 2026-03-07 01:09:32 | INFO  | Task c276371f-d5d4-46ec-8488-c3507f3b179e is in state STARTED 2026-03-07 01:09:32.220440 
| orchestrator | 2026-03-07 01:09:32 | INFO  | Task aef7bb9f-a2c6-494a-9844-2edc32fb92ae is in state STARTED 2026-03-07 01:09:32.220447 | orchestrator | 2026-03-07 01:09:32 | INFO  | Task 7e99d8b7-c450-43f4-ae24-b3bace4763d0 is in state STARTED 2026-03-07 01:09:32.220454 | orchestrator | 2026-03-07 01:09:32 | INFO  | Task 7d0cbddf-4962-4d44-a1f3-6f8ed978f2f9 is in state STARTED 2026-03-07 01:09:32.220461 | orchestrator | 2026-03-07 01:09:32 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:09:35.266572 | orchestrator | 2026-03-07 01:09:35 | INFO  | Task c276371f-d5d4-46ec-8488-c3507f3b179e is in state STARTED 2026-03-07 01:09:35.269032 | orchestrator | 2026-03-07 01:09:35 | INFO  | Task aef7bb9f-a2c6-494a-9844-2edc32fb92ae is in state STARTED 2026-03-07 01:09:35.272173 | orchestrator | 2026-03-07 01:09:35 | INFO  | Task 7e99d8b7-c450-43f4-ae24-b3bace4763d0 is in state STARTED 2026-03-07 01:09:35.272837 | orchestrator | 2026-03-07 01:09:35 | INFO  | Task 7d0cbddf-4962-4d44-a1f3-6f8ed978f2f9 is in state STARTED 2026-03-07 01:09:35.272879 | orchestrator | 2026-03-07 01:09:35 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:09:38.331382 | orchestrator | 2026-03-07 01:09:38 | INFO  | Task c276371f-d5d4-46ec-8488-c3507f3b179e is in state STARTED 2026-03-07 01:09:38.334724 | orchestrator | 2026-03-07 01:09:38 | INFO  | Task aef7bb9f-a2c6-494a-9844-2edc32fb92ae is in state STARTED 2026-03-07 01:09:38.337677 | orchestrator | 2026-03-07 01:09:38 | INFO  | Task 7e99d8b7-c450-43f4-ae24-b3bace4763d0 is in state STARTED 2026-03-07 01:09:38.340087 | orchestrator | 2026-03-07 01:09:38 | INFO  | Task 7d0cbddf-4962-4d44-a1f3-6f8ed978f2f9 is in state STARTED 2026-03-07 01:09:38.340191 | orchestrator | 2026-03-07 01:09:38 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:09:41.372361 | orchestrator | 2026-03-07 01:09:41 | INFO  | Task c276371f-d5d4-46ec-8488-c3507f3b179e is in state STARTED 2026-03-07 01:09:41.374546 | orchestrator | 2026-03-07 
01:09:41 | INFO  | Task aef7bb9f-a2c6-494a-9844-2edc32fb92ae is in state STARTED 2026-03-07 01:09:41.377413 | orchestrator | 2026-03-07 01:09:41 | INFO  | Task 7e99d8b7-c450-43f4-ae24-b3bace4763d0 is in state STARTED 2026-03-07 01:09:41.379572 | orchestrator | 2026-03-07 01:09:41 | INFO  | Task 7d0cbddf-4962-4d44-a1f3-6f8ed978f2f9 is in state STARTED 2026-03-07 01:09:41.379744 | orchestrator | 2026-03-07 01:09:41 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:09:44.412663 | orchestrator | 2026-03-07 01:09:44 | INFO  | Task c276371f-d5d4-46ec-8488-c3507f3b179e is in state STARTED 2026-03-07 01:09:44.415355 | orchestrator | 2026-03-07 01:09:44 | INFO  | Task aef7bb9f-a2c6-494a-9844-2edc32fb92ae is in state STARTED 2026-03-07 01:09:44.417539 | orchestrator | 2026-03-07 01:09:44 | INFO  | Task 7e99d8b7-c450-43f4-ae24-b3bace4763d0 is in state STARTED 2026-03-07 01:09:44.418646 | orchestrator | 2026-03-07 01:09:44 | INFO  | Task 7d0cbddf-4962-4d44-a1f3-6f8ed978f2f9 is in state STARTED 2026-03-07 01:09:44.418716 | orchestrator | 2026-03-07 01:09:44 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:09:47.464900 | orchestrator | 2026-03-07 01:09:47 | INFO  | Task c276371f-d5d4-46ec-8488-c3507f3b179e is in state STARTED 2026-03-07 01:11:47.568800 | orchestrator | 2026-03-07 01:11:47 | INFO  | Task aef7bb9f-a2c6-494a-9844-2edc32fb92ae is in state SUCCESS 2026-03-07 01:11:47.568873 | orchestrator | 2026-03-07 01:11:47 | INFO  | Task 7e99d8b7-c450-43f4-ae24-b3bace4763d0 is in state STARTED 2026-03-07 01:11:47.569512 | orchestrator | 2026-03-07 01:11:47 | INFO  | Task 7d0cbddf-4962-4d44-a1f3-6f8ed978f2f9 is in state SUCCESS 2026-03-07 01:11:47.571540 | orchestrator | 2026-03-07 01:11:47.571569 | orchestrator | 2026-03-07 01:11:47.571574 | orchestrator | PLAY [Download ironic ipa images] ********************************************** 2026-03-07 01:11:47.571579 | orchestrator | 2026-03-07 01:11:47.571583 | orchestrator | TASK [Ensure the destination 
directory exists] ********************************* 2026-03-07 01:11:47.571587 | orchestrator | Saturday 07 March 2026 01:05:57 +0000 (0:00:00.292) 0:00:00.292 ******** 2026-03-07 01:11:47.571591 | orchestrator | changed: [localhost] 2026-03-07 01:11:47.571596 | orchestrator | 2026-03-07 01:11:47.571599 | orchestrator | TASK [Download ironic-agent initramfs] ***************************************** 2026-03-07 01:11:47.571604 | orchestrator | Saturday 07 March 2026 01:06:00 +0000 (0:00:03.119) 0:00:03.412 ******** 2026-03-07 01:11:47.571616 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent initramfs (3 retries left). 2026-03-07 01:11:47.571620 | orchestrator | 2026-03-07 01:11:47.571623 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2026-03-07 01:11:47.571627 | orchestrator | 2026-03-07 01:11:47.571631 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2026-03-07 01:11:47.571635 | orchestrator | 2026-03-07 01:11:47.571639 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2026-03-07 01:11:47.571643 | orchestrator | 2026-03-07 01:11:47.571646 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2026-03-07 01:11:47.571650 | orchestrator | 2026-03-07 01:11:47.571654 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2026-03-07 01:11:47.571658 | orchestrator | changed: [localhost] 2026-03-07 01:11:47.571662 | orchestrator | 2026-03-07 01:11:47.571666 | orchestrator | TASK [Download ironic-agent kernel] ******************************************** 2026-03-07 01:11:47.571670 | orchestrator | Saturday 07 March 2026 01:10:33 +0000 (0:04:32.380) 0:04:35.792 ******** 2026-03-07 01:11:47.571674 | orchestrator | changed: [localhost] 2026-03-07 01:11:47.571678 | orchestrator | 2026-03-07 01:11:47.571681 | orchestrator | PLAY 
[Group hosts based on configuration] ************************************** 2026-03-07 01:11:47.571685 | orchestrator | 2026-03-07 01:11:47.571689 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-07 01:11:47.571693 | orchestrator | Saturday 07 March 2026 01:10:40 +0000 (0:00:06.742) 0:04:42.535 ******** 2026-03-07 01:11:47.571697 | orchestrator | ok: [testbed-node-0] 2026-03-07 01:11:47.571700 | orchestrator | ok: [testbed-node-1] 2026-03-07 01:11:47.571704 | orchestrator | ok: [testbed-node-2] 2026-03-07 01:11:47.571708 | orchestrator | 2026-03-07 01:11:47.571712 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-07 01:11:47.571716 | orchestrator | Saturday 07 March 2026 01:10:40 +0000 (0:00:00.554) 0:04:43.090 ******** 2026-03-07 01:11:47.571719 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False) 2026-03-07 01:11:47.571723 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False) 2026-03-07 01:11:47.571727 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False) 2026-03-07 01:11:47.571731 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True 2026-03-07 01:11:47.571745 | orchestrator | 2026-03-07 01:11:47.571749 | orchestrator | PLAY [Apply role ironic] ******************************************************* 2026-03-07 01:11:47.571753 | orchestrator | skipping: no hosts matched 2026-03-07 01:11:47.571757 | orchestrator | 2026-03-07 01:11:47.571761 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-07 01:11:47.571765 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-07 01:11:47.571770 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-07 01:11:47.571775 | orchestrator | testbed-node-1 : ok=2  changed=0 
unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-07 01:11:47.571779 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-07 01:11:47.571782 | orchestrator | 2026-03-07 01:11:47.571786 | orchestrator | 2026-03-07 01:11:47.571828 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-07 01:11:47.571838 | orchestrator | Saturday 07 March 2026 01:10:41 +0000 (0:00:00.959) 0:04:44.050 ******** 2026-03-07 01:11:47.571845 | orchestrator | =============================================================================== 2026-03-07 01:11:47.571851 | orchestrator | Download ironic-agent initramfs --------------------------------------- 272.38s 2026-03-07 01:11:47.571857 | orchestrator | Download ironic-agent kernel -------------------------------------------- 6.74s 2026-03-07 01:11:47.571863 | orchestrator | Ensure the destination directory exists --------------------------------- 3.12s 2026-03-07 01:11:47.571869 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.96s 2026-03-07 01:11:47.571875 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.55s 2026-03-07 01:11:47.571881 | orchestrator | 2026-03-07 01:11:47.571888 | orchestrator | 2026-03-07 01:11:47.571894 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-07 01:11:47.571901 | orchestrator | 2026-03-07 01:11:47.571906 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-07 01:11:47.571910 | orchestrator | Saturday 07 March 2026 01:07:59 +0000 (0:00:00.496) 0:00:00.496 ******** 2026-03-07 01:11:47.571914 | orchestrator | ok: [testbed-node-0] 2026-03-07 01:11:47.571917 | orchestrator | ok: [testbed-node-1] 2026-03-07 01:11:47.571921 | orchestrator | ok: [testbed-node-2] 2026-03-07 01:11:47.571925 | orchestrator | 
2026-03-07 01:11:47.571929 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-07 01:11:47.571933 | orchestrator | Saturday 07 March 2026 01:08:00 +0000 (0:00:00.749) 0:00:01.245 ******** 2026-03-07 01:11:47.571936 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2026-03-07 01:11:47.571940 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True) 2026-03-07 01:11:47.571944 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True) 2026-03-07 01:11:47.571948 | orchestrator | 2026-03-07 01:11:47.571960 | orchestrator | PLAY [Apply role magnum] ******************************************************* 2026-03-07 01:11:47.571964 | orchestrator | 2026-03-07 01:11:47.571968 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-03-07 01:11:47.571971 | orchestrator | Saturday 07 March 2026 01:08:01 +0000 (0:00:00.928) 0:00:02.174 ******** 2026-03-07 01:11:47.571975 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 01:11:47.571979 | orchestrator | 2026-03-07 01:11:47.571983 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************ 2026-03-07 01:11:47.571987 | orchestrator | Saturday 07 March 2026 01:08:02 +0000 (0:00:00.844) 0:00:03.018 ******** 2026-03-07 01:11:47.571994 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra)) 2026-03-07 01:11:47.571998 | orchestrator | 2026-03-07 01:11:47.572002 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] *********************** 2026-03-07 01:11:47.572010 | orchestrator | Saturday 07 March 2026 01:08:05 +0000 (0:00:03.372) 0:00:06.391 ******** 2026-03-07 01:11:47.572014 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal) 2026-03-07 01:11:47.572018 | orchestrator | changed: 
[testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public) 2026-03-07 01:11:47.572022 | orchestrator | 2026-03-07 01:11:47.572025 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************ 2026-03-07 01:11:47.572029 | orchestrator | Saturday 07 March 2026 01:08:12 +0000 (0:00:06.644) 0:00:13.035 ******** 2026-03-07 01:11:47.572033 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-07 01:11:47.572037 | orchestrator | 2026-03-07 01:11:47.572040 | orchestrator | TASK [service-ks-register : magnum | Creating users] *************************** 2026-03-07 01:11:47.572044 | orchestrator | Saturday 07 March 2026 01:08:15 +0000 (0:00:03.496) 0:00:16.532 ******** 2026-03-07 01:11:47.572048 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service) 2026-03-07 01:11:47.572052 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-07 01:11:47.572055 | orchestrator | 2026-03-07 01:11:47.572059 | orchestrator | TASK [service-ks-register : magnum | Creating roles] *************************** 2026-03-07 01:11:47.572063 | orchestrator | Saturday 07 March 2026 01:08:19 +0000 (0:00:04.024) 0:00:20.556 ******** 2026-03-07 01:11:47.572067 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-07 01:11:47.572070 | orchestrator | 2026-03-07 01:11:47.572075 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] ********************** 2026-03-07 01:11:47.572079 | orchestrator | Saturday 07 March 2026 01:08:23 +0000 (0:00:03.480) 0:00:24.036 ******** 2026-03-07 01:11:47.572084 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin) 2026-03-07 01:11:47.572088 | orchestrator | 2026-03-07 01:11:47.572093 | orchestrator | TASK [magnum : Creating Magnum trustee domain] ********************************* 2026-03-07 01:11:47.572099 | orchestrator | Saturday 07 March 2026 01:08:27 +0000 (0:00:04.156) 0:00:28.192 ******** 2026-03-07 
01:11:47.572105 | orchestrator | changed: [testbed-node-0]
2026-03-07 01:11:47.572113 | orchestrator |
2026-03-07 01:11:47.572132 | orchestrator | TASK [magnum : Creating Magnum trustee user] ***********************************
2026-03-07 01:11:47.572138 | orchestrator | Saturday 07 March 2026 01:08:31 +0000 (0:00:03.588) 0:00:31.781 ********
2026-03-07 01:11:47.572144 | orchestrator | changed: [testbed-node-0]
2026-03-07 01:11:47.572150 | orchestrator |
2026-03-07 01:11:47.572156 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ******************************
2026-03-07 01:11:47.572163 | orchestrator | Saturday 07 March 2026 01:08:35 +0000 (0:00:04.049) 0:00:35.830 ********
2026-03-07 01:11:47.572169 | orchestrator | changed: [testbed-node-0]
2026-03-07 01:11:47.572176 | orchestrator |
2026-03-07 01:11:47.572182 | orchestrator | TASK [magnum : Ensuring config directories exist] ******************************
2026-03-07 01:11:47.572188 | orchestrator | Saturday 07 March 2026 01:08:38 +0000 (0:00:03.336) 0:00:39.166 ********
2026-03-07 01:11:47.572198 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-07 01:11:47.572213 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-07 01:11:47.572228 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-07 01:11:47.572236 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-07 01:11:47.572244 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-07 01:11:47.572250 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-07 01:11:47.572261 | orchestrator |
2026-03-07 01:11:47.572268 | orchestrator | TASK [magnum : Check if policies shall be overwritten] *************************
2026-03-07 01:11:47.572275 | orchestrator | Saturday 07 March 2026 01:08:40 +0000 (0:00:01.654) 0:00:40.821 ********
2026-03-07 01:11:47.572282 | orchestrator | skipping: [testbed-node-0]
2026-03-07 01:11:47.572289 | orchestrator |
2026-03-07 01:11:47.572296 | orchestrator | TASK [magnum : Set magnum policy file] *****************************************
2026-03-07 01:11:47.572302 | orchestrator | Saturday 07 March 2026 01:08:40 +0000 (0:00:00.162) 0:00:40.983 ********
2026-03-07 01:11:47.572309 | orchestrator | skipping: [testbed-node-0]
2026-03-07 01:11:47.572316 | orchestrator | skipping: [testbed-node-1]
2026-03-07 01:11:47.572322 | orchestrator | skipping: [testbed-node-2]
2026-03-07 01:11:47.572328 | orchestrator |
2026-03-07 01:11:47.572335 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] ***************************
2026-03-07 01:11:47.572341 | orchestrator | Saturday 07 March 2026 01:08:40 +0000 (0:00:00.603) 0:00:41.586 ********
2026-03-07 01:11:47.572351 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-07 01:11:47.572358 | orchestrator |
2026-03-07 01:11:47.572364 | orchestrator | TASK [magnum : Copying over kubeconfig file] ***********************************
2026-03-07 01:11:47.572370 | orchestrator | Saturday 07 March 2026 01:08:41 +0000 (0:00:01.059) 0:00:42.646 ********
2026-03-07 01:11:47.572380 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-07 01:11:47.572387 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-07 01:11:47.572394 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-07 01:11:47.572405 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-07 01:11:47.572418 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-07 01:11:47.572427 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-07 01:11:47.572434 | orchestrator |
2026-03-07 01:11:47.572440 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ******************************
2026-03-07 01:11:47.572444 | orchestrator | Saturday 07 March 2026 01:08:44 +0000 (0:00:02.472) 0:00:45.118 ********
2026-03-07 01:11:47.572448 | orchestrator | ok: [testbed-node-0]
2026-03-07 01:11:47.572451 | orchestrator | ok: [testbed-node-1]
2026-03-07 01:11:47.572455 | orchestrator | ok: [testbed-node-2]
2026-03-07 01:11:47.572459 | orchestrator |
2026-03-07 01:11:47.572463 | orchestrator | TASK [magnum : include_tasks] **************************************************
2026-03-07 01:11:47.572467 | orchestrator | Saturday 07 March 2026 01:08:44 +0000 (0:00:00.356) 0:00:45.475 ********
2026-03-07 01:11:47.572471 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-07 01:11:47.572475 | orchestrator |
2026-03-07 01:11:47.572478 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] *********
2026-03-07 01:11:47.572482 | orchestrator | Saturday 07 March 2026 01:08:45 +0000 (0:00:00.845) 0:00:46.320 ********
2026-03-07 01:11:47.572486 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-07 01:11:47.572493 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-07 01:11:47.572501 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-07 01:11:47.572507 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-07 01:11:47.572511 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-07 01:11:47.572515 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-07 01:11:47.572521 | orchestrator |
2026-03-07 01:11:47.572525 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] ***
2026-03-07 01:11:47.572529 | orchestrator | Saturday 07 March 2026 01:08:48 +0000 (0:00:02.668) 0:00:48.989 ********
2026-03-07 01:11:47.572533 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-07 01:11:47.572540 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-07 01:11:47.572544 | orchestrator | skipping: [testbed-node-0]
2026-03-07 01:11:47.572550 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-07 01:11:47.572555 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-07 01:11:47.572559 | orchestrator | skipping: [testbed-node-1]
2026-03-07 01:11:47.572562 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-07 01:11:47.572572 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-07 01:11:47.572576 | orchestrator | skipping: [testbed-node-2]
2026-03-07 01:11:47.572580 | orchestrator |
2026-03-07 01:11:47.572583 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ******
2026-03-07 01:11:47.572587 | orchestrator | Saturday 07 March 2026 01:08:49 +0000 (0:00:00.765) 0:00:49.754 ********
2026-03-07 01:11:47.572596 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-07 01:11:47.572601 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-07 01:11:47.572605 | orchestrator | skipping: [testbed-node-0]
2026-03-07 01:11:47.572609 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-07 01:11:47.572615 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-07 01:11:47.572619 | orchestrator | skipping: [testbed-node-1]
2026-03-07 01:11:47.572623 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-07 01:11:47.572752 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-07 01:11:47.572762 | orchestrator | skipping: [testbed-node-2]
2026-03-07 01:11:47.572766 | orchestrator |
2026-03-07 01:11:47.572770 | orchestrator | TASK [magnum : Copying over config.json files for services] ********************
2026-03-07 01:11:47.572774 | orchestrator | Saturday 07 March 2026 01:08:50 +0000 (0:00:01.426) 0:00:51.181 ********
2026-03-07 01:11:47.572778 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-07 01:11:47.572785 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-07 01:11:47.572789 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-07 01:11:47.572796 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-07 01:11:47.572802 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-07 01:11:47.572806 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-07 01:11:47.572813 | orchestrator |
2026-03-07 01:11:47.572816 | orchestrator | TASK [magnum : Copying over magnum.conf] ***************************************
2026-03-07 01:11:47.572820 | orchestrator | Saturday 07 March 2026 01:08:53 +0000 (0:00:02.779) 0:00:53.961 ********
2026-03-07 01:11:47.572824 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-07 01:11:47.572828 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-07 01:11:47.572835 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-07 01:11:47.572841 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-07 01:11:47.572848 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-07 01:11:47.572852 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-07 01:11:47.572855 | orchestrator |
2026-03-07 01:11:47.572859 | orchestrator | TASK [magnum : Copying over existing policy file] ******************************
2026-03-07 01:11:47.572863 | orchestrator | Saturday 07 March 2026 01:08:59 +0000 (0:00:06.233) 0:01:00.194 ********
2026-03-07 01:11:47.572867 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-07 01:11:47.572874 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-07 01:11:47.572878 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:11:47.572884 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-07 01:11:47.572891 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-07 01:11:47.572895 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:11:47.572899 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 
'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-07 01:11:47.572903 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-07 01:11:47.572906 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:11:47.572910 | orchestrator | 2026-03-07 01:11:47.572914 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2026-03-07 01:11:47.572918 | orchestrator | Saturday 07 March 2026 01:09:00 +0000 (0:00:00.956) 0:01:01.150 ******** 2026-03-07 01:11:47.572926 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 
'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-07 01:11:47.572933 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-07 01:11:47.572937 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-07 01:11:47.572941 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-07 01:11:47.572945 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-07 01:11:47.572951 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-07 01:11:47.572957 | orchestrator | 2026-03-07 01:11:47.572963 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-03-07 01:11:47.572967 | orchestrator | Saturday 07 March 2026 01:09:03 +0000 (0:00:02.804) 0:01:03.954 ******** 2026-03-07 01:11:47.572971 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:11:47.572974 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:11:47.572978 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:11:47.572982 | orchestrator | 2026-03-07 01:11:47.572986 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2026-03-07 01:11:47.572989 | orchestrator | Saturday 07 March 2026 01:09:03 +0000 (0:00:00.409) 0:01:04.364 ******** 2026-03-07 01:11:47.572993 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:11:47.572997 | orchestrator | 2026-03-07 01:11:47.573001 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2026-03-07 01:11:47.573004 | orchestrator | Saturday 07 March 2026 01:09:05 +0000 (0:00:02.256) 0:01:06.620 ******** 2026-03-07 01:11:47.573008 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:11:47.573012 | orchestrator | 2026-03-07 01:11:47.573016 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2026-03-07 01:11:47.573019 | orchestrator | Saturday 07 March 2026 01:09:08 +0000 (0:00:02.337) 0:01:08.957 ******** 2026-03-07 01:11:47.573023 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:11:47.573027 | orchestrator | 2026-03-07 
01:11:47.573031 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-03-07 01:11:47.573034 | orchestrator | Saturday 07 March 2026 01:09:25 +0000 (0:00:17.126) 0:01:26.084 ******** 2026-03-07 01:11:47.573038 | orchestrator | 2026-03-07 01:11:47.573042 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-03-07 01:11:47.573046 | orchestrator | Saturday 07 March 2026 01:09:25 +0000 (0:00:00.079) 0:01:26.163 ******** 2026-03-07 01:11:47.573049 | orchestrator | 2026-03-07 01:11:47.573053 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-03-07 01:11:47.573057 | orchestrator | Saturday 07 March 2026 01:09:25 +0000 (0:00:00.080) 0:01:26.244 ******** 2026-03-07 01:11:47.573060 | orchestrator | 2026-03-07 01:11:47.573064 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2026-03-07 01:11:47.573068 | orchestrator | Saturday 07 March 2026 01:09:25 +0000 (0:00:00.078) 0:01:26.322 ******** 2026-03-07 01:11:47.573072 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:11:47.573075 | orchestrator | changed: [testbed-node-1] 2026-03-07 01:11:47.573079 | orchestrator | changed: [testbed-node-2] 2026-03-07 01:11:47.573083 | orchestrator | 2026-03-07 01:11:47.573087 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2026-03-07 01:11:47.573090 | orchestrator | Saturday 07 March 2026 01:09:45 +0000 (0:00:20.215) 0:01:46.538 ******** 2026-03-07 01:11:47.573094 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:11:47.573098 | orchestrator | changed: [testbed-node-1] 2026-03-07 01:11:47.573101 | orchestrator | changed: [testbed-node-2] 2026-03-07 01:11:47.573105 | orchestrator | 2026-03-07 01:11:47.573109 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-07 01:11:47.573113 | 
orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-07 01:11:47.573117 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-07 01:11:47.573138 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-07 01:11:47.573142 | orchestrator | 2026-03-07 01:11:47.573145 | orchestrator | 2026-03-07 01:11:47.573149 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-07 01:11:47.573153 | orchestrator | Saturday 07 March 2026 01:10:01 +0000 (0:00:15.193) 0:02:01.731 ******** 2026-03-07 01:11:47.573161 | orchestrator | =============================================================================== 2026-03-07 01:11:47.573165 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 20.22s 2026-03-07 01:11:47.573169 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 17.13s 2026-03-07 01:11:47.573172 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 15.19s 2026-03-07 01:11:47.573176 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 6.64s 2026-03-07 01:11:47.573180 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 6.23s 2026-03-07 01:11:47.573183 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 4.16s 2026-03-07 01:11:47.573187 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 4.05s 2026-03-07 01:11:47.573191 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 4.02s 2026-03-07 01:11:47.573195 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.59s 2026-03-07 01:11:47.573198 | orchestrator | 
service-ks-register : magnum | Creating projects ------------------------ 3.50s 2026-03-07 01:11:47.573204 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.48s 2026-03-07 01:11:47.573210 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.37s 2026-03-07 01:11:47.573217 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.33s 2026-03-07 01:11:47.573226 | orchestrator | magnum : Check magnum containers ---------------------------------------- 2.80s 2026-03-07 01:11:47.573235 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.78s 2026-03-07 01:11:47.573241 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.67s 2026-03-07 01:11:47.573248 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.47s 2026-03-07 01:11:47.573257 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.34s 2026-03-07 01:11:47.573264 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.26s 2026-03-07 01:11:47.573270 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 1.65s 2026-03-07 01:11:47.573277 | orchestrator | 2026-03-07 01:11:47 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:11:50.610789 | orchestrator | 2026-03-07 01:11:50 | INFO  | Task c276371f-d5d4-46ec-8488-c3507f3b179e is in state STARTED 2026-03-07 01:11:50.612241 | orchestrator | 2026-03-07 01:11:50 | INFO  | Task 7e99d8b7-c450-43f4-ae24-b3bace4763d0 is in state STARTED 2026-03-07 01:11:50.613872 | orchestrator | 2026-03-07 01:11:50 | INFO  | Task 60904f09-8872-4075-a69b-77fe3d92047d is in state STARTED 2026-03-07 01:11:50.615266 | orchestrator | 2026-03-07 01:11:50 | INFO  | Task 53104c7a-640d-4b51-aa00-177595f2a452 is in state STARTED 2026-03-07 
01:11:50.615320 | orchestrator | 2026-03-07 01:11:50 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:12:48.457156 | orchestrator | 2026-03-07 01:12:48 | INFO  |
Task c276371f-d5d4-46ec-8488-c3507f3b179e is in state STARTED 2026-03-07 01:12:48.459294 | orchestrator | 2026-03-07 01:12:48 | INFO  | Task 7e99d8b7-c450-43f4-ae24-b3bace4763d0 is in state STARTED 2026-03-07 01:12:48.460851 | orchestrator | 2026-03-07 01:12:48 | INFO  | Task 60904f09-8872-4075-a69b-77fe3d92047d is in state STARTED 2026-03-07 01:12:48.463039 | orchestrator | 2026-03-07 01:12:48 | INFO  | Task 53104c7a-640d-4b51-aa00-177595f2a452 is in state STARTED 2026-03-07 01:12:48.463104 | orchestrator | 2026-03-07 01:12:48 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:12:51.498818 | orchestrator | 2026-03-07 01:12:51.498909 | orchestrator | 2026-03-07 01:12:51.498920 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-07 01:12:51.498928 | orchestrator | 2026-03-07 01:12:51.498934 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-07 01:12:51.498941 | orchestrator | Saturday 07 March 2026 01:09:28 +0000 (0:00:00.355) 0:00:00.355 ******** 2026-03-07 01:12:51.498948 | orchestrator | ok: [testbed-node-0] 2026-03-07 01:12:51.498955 | orchestrator | ok: [testbed-node-1] 2026-03-07 01:12:51.498959 | orchestrator | ok: [testbed-node-2] 2026-03-07 01:12:51.498963 | orchestrator | 2026-03-07 01:12:51.498967 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-07 01:12:51.498985 | orchestrator | Saturday 07 March 2026 01:09:28 +0000 (0:00:00.395) 0:00:00.750 ******** 2026-03-07 01:12:51.498989 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True) 2026-03-07 01:12:51.498993 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True) 2026-03-07 01:12:51.498997 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True) 2026-03-07 01:12:51.499001 | orchestrator | 2026-03-07 01:12:51.499005 | orchestrator | PLAY [Apply role glance] 
******************************************************* 2026-03-07 01:12:51.499008 | orchestrator | 2026-03-07 01:12:51.499012 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-03-07 01:12:51.499016 | orchestrator | Saturday 07 March 2026 01:09:29 +0000 (0:00:00.809) 0:00:01.560 ******** 2026-03-07 01:12:51.499020 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 01:12:51.499025 | orchestrator | 2026-03-07 01:12:51.499028 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************ 2026-03-07 01:12:51.499032 | orchestrator | Saturday 07 March 2026 01:09:30 +0000 (0:00:00.746) 0:00:02.307 ******** 2026-03-07 01:12:51.499036 | orchestrator | changed: [testbed-node-0] => (item=glance (image)) 2026-03-07 01:12:51.499040 | orchestrator | 2026-03-07 01:12:51.499043 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] *********************** 2026-03-07 01:12:51.499047 | orchestrator | Saturday 07 March 2026 01:09:34 +0000 (0:00:03.817) 0:00:06.124 ******** 2026-03-07 01:12:51.499052 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal) 2026-03-07 01:12:51.499056 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public) 2026-03-07 01:12:51.499060 | orchestrator | 2026-03-07 01:12:51.499064 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************ 2026-03-07 01:12:51.499070 | orchestrator | Saturday 07 March 2026 01:09:40 +0000 (0:00:06.734) 0:00:12.858 ******** 2026-03-07 01:12:51.499079 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-07 01:12:51.499087 | orchestrator | 2026-03-07 01:12:51.499093 | orchestrator | TASK [service-ks-register : glance | Creating users] *************************** 2026-03-07 01:12:51.499100 | 
orchestrator | Saturday 07 March 2026 01:09:44 +0000 (0:00:03.500) 0:00:16.359 ******** 2026-03-07 01:12:51.499161 | orchestrator | changed: [testbed-node-0] => (item=glance -> service) 2026-03-07 01:12:51.499169 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-07 01:12:51.499176 | orchestrator | 2026-03-07 01:12:51.499182 | orchestrator | TASK [service-ks-register : glance | Creating roles] *************************** 2026-03-07 01:12:51.499188 | orchestrator | Saturday 07 March 2026 01:09:48 +0000 (0:00:04.080) 0:00:20.439 ******** 2026-03-07 01:12:51.499194 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-07 01:12:51.499278 | orchestrator | 2026-03-07 01:12:51.499288 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2026-03-07 01:12:51.499294 | orchestrator | Saturday 07 March 2026 01:09:52 +0000 (0:00:03.817) 0:00:24.257 ******** 2026-03-07 01:12:51.499301 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2026-03-07 01:12:51.499308 | orchestrator | 2026-03-07 01:12:51.499316 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2026-03-07 01:12:51.499322 | orchestrator | Saturday 07 March 2026 01:09:56 +0000 (0:00:04.072) 0:00:28.329 ******** 2026-03-07 01:12:51.499357 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-07 01:12:51.499371 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 
'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-07 01:12:51.499377 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 
fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-07 01:12:51.499387 | orchestrator | 2026-03-07 01:12:51.499392 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-03-07 01:12:51.499397 | orchestrator | Saturday 07 March 2026 01:10:00 +0000 (0:00:03.919) 0:00:32.248 ******** 2026-03-07 01:12:51.499402 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 01:12:51.499406 | orchestrator | 2026-03-07 01:12:51.499411 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2026-03-07 01:12:51.499419 | orchestrator | Saturday 07 March 2026 01:10:01 +0000 (0:00:00.914) 0:00:33.163 ******** 2026-03-07 01:12:51.499423 | orchestrator | changed: [testbed-node-1] 2026-03-07 01:12:51.499428 | orchestrator | changed: [testbed-node-2] 2026-03-07 01:12:51.499433 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:12:51.499437 | orchestrator | 2026-03-07 01:12:51.499441 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2026-03-07 01:12:51.499446 | orchestrator | Saturday 07 March 2026 01:10:05 +0000 (0:00:04.259) 0:00:37.423 ******** 2026-03-07 01:12:51.499450 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-07 01:12:51.499459 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 
'enabled': True}) 2026-03-07 01:12:51.499464 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-07 01:12:51.499468 | orchestrator | 2026-03-07 01:12:51.499473 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2026-03-07 01:12:51.499477 | orchestrator | Saturday 07 March 2026 01:10:07 +0000 (0:00:01.849) 0:00:39.272 ******** 2026-03-07 01:12:51.499482 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-07 01:12:51.499486 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-07 01:12:51.499491 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-07 01:12:51.499495 | orchestrator | 2026-03-07 01:12:51.499500 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2026-03-07 01:12:51.499504 | orchestrator | Saturday 07 March 2026 01:10:08 +0000 (0:00:01.359) 0:00:40.631 ******** 2026-03-07 01:12:51.499508 | orchestrator | ok: [testbed-node-0] 2026-03-07 01:12:51.499517 | orchestrator | ok: [testbed-node-1] 2026-03-07 01:12:51.499522 | orchestrator | ok: [testbed-node-2] 2026-03-07 01:12:51.499526 | orchestrator | 2026-03-07 01:12:51.499531 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2026-03-07 01:12:51.499534 | orchestrator | Saturday 07 March 2026 01:10:09 +0000 (0:00:00.842) 0:00:41.474 ******** 2026-03-07 01:12:51.499538 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:12:51.499542 | orchestrator | 2026-03-07 01:12:51.499546 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2026-03-07 01:12:51.499549 | orchestrator | Saturday 07 March 2026 01:10:09 +0000 (0:00:00.136) 
0:00:41.611 ******** 2026-03-07 01:12:51.499553 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:12:51.499557 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:12:51.499561 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:12:51.499564 | orchestrator | 2026-03-07 01:12:51.499568 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-03-07 01:12:51.499572 | orchestrator | Saturday 07 March 2026 01:10:09 +0000 (0:00:00.307) 0:00:41.918 ******** 2026-03-07 01:12:51.499576 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 01:12:51.499579 | orchestrator | 2026-03-07 01:12:51.499583 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2026-03-07 01:12:51.499587 | orchestrator | Saturday 07 March 2026 01:10:10 +0000 (0:00:00.597) 0:00:42.516 ******** 2026-03-07 01:12:51.499591 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 
check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-07 01:12:51.499603 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-07 01:12:51.499612 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 
2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-07 01:12:51.499616 | orchestrator | 2026-03-07 01:12:51.499620 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2026-03-07 01:12:51.499623 | orchestrator | Saturday 07 March 2026 01:10:15 +0000 (0:00:05.370) 0:00:47.886 ******** 2026-03-07 01:12:51.499635 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check 
inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-07 01:12:51.499643 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:12:51.499647 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-07 01:12:51.499651 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:12:51.499659 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-07 01:12:51.499664 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:12:51.499671 | orchestrator | 2026-03-07 01:12:51.499677 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2026-03-07 01:12:51.499681 | orchestrator | Saturday 07 March 2026 
01:10:20 +0000 (0:00:04.616) 0:00:52.503 ******** 2026-03-07 01:12:51.499685 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-07 01:12:51.499689 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:12:51.499693 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 
'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-07 01:12:51.499697 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:12:51.499711 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-07 01:12:51.499724 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:12:51.499731 | orchestrator | 2026-03-07 01:12:51.499737 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2026-03-07 01:12:51.499741 | orchestrator | Saturday 07 March 2026 01:10:25 +0000 (0:00:05.433) 0:00:57.937 ******** 2026-03-07 01:12:51.499745 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:12:51.499748 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:12:51.499752 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:12:51.499756 | orchestrator 
| 2026-03-07 01:12:51.499760 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2026-03-07 01:12:51.499763 | orchestrator | Saturday 07 March 2026 01:10:31 +0000 (0:00:05.562) 0:01:03.499 ******** 2026-03-07 01:12:51.499767 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-07 01:12:51.499782 | orchestrator 
| changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-07 01:12:51.499790 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-07 01:12:51.499795 | orchestrator | 2026-03-07 01:12:51.499798 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2026-03-07 01:12:51.499802 | orchestrator | Saturday 07 March 2026 01:10:37 +0000 (0:00:06.279) 0:01:09.778 ******** 2026-03-07 01:12:51.499806 | orchestrator | changed: [testbed-node-1] 2026-03-07 01:12:51.499809 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:12:51.499813 | orchestrator | changed: [testbed-node-2] 2026-03-07 01:12:51.499817 | orchestrator | 2026-03-07 01:12:51.499821 | orchestrator | TASK [glance : Copying over 
glance-cache.conf for glance_api] ****************** 2026-03-07 01:12:51.499824 | orchestrator | Saturday 07 March 2026 01:10:46 +0000 (0:00:08.542) 0:01:18.321 ******** 2026-03-07 01:12:51.499828 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:12:51.499832 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:12:51.499835 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:12:51.499845 | orchestrator | 2026-03-07 01:12:51.499851 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 2026-03-07 01:12:51.499858 | orchestrator | Saturday 07 March 2026 01:10:52 +0000 (0:00:06.379) 0:01:24.700 ******** 2026-03-07 01:12:51.499863 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:12:51.499869 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:12:51.499877 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:12:51.499883 | orchestrator | 2026-03-07 01:12:51.499889 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2026-03-07 01:12:51.499895 | orchestrator | Saturday 07 March 2026 01:10:59 +0000 (0:00:06.471) 0:01:31.172 ******** 2026-03-07 01:12:51.499904 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:12:51.499916 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:12:51.500047 | orchestrator | 2026-03-07 01:12:51 | INFO  | Task c276371f-d5d4-46ec-8488-c3507f3b179e is in state SUCCESS 2026-03-07 01:12:51.500057 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:12:51.500060 | orchestrator | 2026-03-07 01:12:51.500064 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2026-03-07 01:12:51.500068 | orchestrator | Saturday 07 March 2026 01:11:06 +0000 (0:00:07.364) 0:01:38.536 ******** 2026-03-07 01:12:51.500072 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:12:51.500076 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:12:51.500083 | orchestrator | skipping: 
[testbed-node-1] 2026-03-07 01:12:51.500087 | orchestrator | 2026-03-07 01:12:51.500091 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2026-03-07 01:12:51.500095 | orchestrator | Saturday 07 March 2026 01:11:12 +0000 (0:00:05.770) 0:01:44.306 ******** 2026-03-07 01:12:51.500099 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:12:51.500102 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:12:51.500129 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:12:51.500135 | orchestrator | 2026-03-07 01:12:51.500141 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2026-03-07 01:12:51.500148 | orchestrator | Saturday 07 March 2026 01:11:12 +0000 (0:00:00.360) 0:01:44.667 ******** 2026-03-07 01:12:51.500152 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-03-07 01:12:51.500156 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:12:51.500160 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-03-07 01:12:51.500164 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:12:51.500167 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-03-07 01:12:51.500171 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:12:51.500175 | orchestrator | 2026-03-07 01:12:51.500179 | orchestrator | TASK [glance : Generating 'hostnqn' file for glance_api] *********************** 2026-03-07 01:12:51.500182 | orchestrator | Saturday 07 March 2026 01:11:16 +0000 (0:00:04.319) 0:01:48.986 ******** 2026-03-07 01:12:51.500186 | orchestrator | changed: [testbed-node-1] 2026-03-07 01:12:51.500190 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:12:51.500196 | orchestrator | changed: [testbed-node-2] 2026-03-07 01:12:51.500202 | orchestrator | 2026-03-07 01:12:51.500207 | 
orchestrator | TASK [glance : Check glance containers] **************************************** 2026-03-07 01:12:51.500217 | orchestrator | Saturday 07 March 2026 01:11:22 +0000 (0:00:05.815) 0:01:54.801 ******** 2026-03-07 01:12:51.500226 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-07 01:12:51.500253 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-07 01:12:51.500261 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-07 01:12:51.500272 | orchestrator | 2026-03-07 01:12:51.500279 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-03-07 01:12:51.500286 | orchestrator | Saturday 07 March 2026 01:11:26 +0000 (0:00:03.693) 0:01:58.495 ******** 2026-03-07 01:12:51.500290 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:12:51.500295 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:12:51.500301 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:12:51.500307 | orchestrator | 2026-03-07 01:12:51.500312 | orchestrator | TASK [glance : Creating 
Glance database] *************************************** 2026-03-07 01:12:51.500318 | orchestrator | Saturday 07 March 2026 01:11:26 +0000 (0:00:00.338) 0:01:58.833 ******** 2026-03-07 01:12:51.500323 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:12:51.500329 | orchestrator | 2026-03-07 01:12:51.500335 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] ********** 2026-03-07 01:12:51.500341 | orchestrator | Saturday 07 March 2026 01:11:28 +0000 (0:00:02.135) 0:02:00.969 ******** 2026-03-07 01:12:51.500346 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:12:51.500351 | orchestrator | 2026-03-07 01:12:51.500357 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2026-03-07 01:12:51.500362 | orchestrator | Saturday 07 March 2026 01:11:31 +0000 (0:00:02.492) 0:02:03.462 ******** 2026-03-07 01:12:51.500369 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:12:51.500414 | orchestrator | 2026-03-07 01:12:51.500419 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2026-03-07 01:12:51.500423 | orchestrator | Saturday 07 March 2026 01:11:33 +0000 (0:00:01.982) 0:02:05.444 ******** 2026-03-07 01:12:51.500427 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:12:51.500430 | orchestrator | 2026-03-07 01:12:51.500434 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2026-03-07 01:12:51.500438 | orchestrator | Saturday 07 March 2026 01:12:04 +0000 (0:00:30.648) 0:02:36.092 ******** 2026-03-07 01:12:51.500441 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:12:51.500445 | orchestrator | 2026-03-07 01:12:51.500449 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-03-07 01:12:51.500453 | orchestrator | Saturday 07 March 2026 01:12:06 +0000 (0:00:02.291) 0:02:38.384 ******** 2026-03-07 01:12:51.500456 | orchestrator | 
2026-03-07 01:12:51.500464 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-03-07 01:12:51.500468 | orchestrator | Saturday 07 March 2026 01:12:06 +0000 (0:00:00.327) 0:02:38.712 ******** 2026-03-07 01:12:51.500472 | orchestrator | 2026-03-07 01:12:51.500475 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-03-07 01:12:51.500479 | orchestrator | Saturday 07 March 2026 01:12:06 +0000 (0:00:00.184) 0:02:38.897 ******** 2026-03-07 01:12:51.500483 | orchestrator | 2026-03-07 01:12:51.500492 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2026-03-07 01:12:51.500496 | orchestrator | Saturday 07 March 2026 01:12:06 +0000 (0:00:00.094) 0:02:38.992 ******** 2026-03-07 01:12:51.500500 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:12:51.500503 | orchestrator | changed: [testbed-node-1] 2026-03-07 01:12:51.500507 | orchestrator | changed: [testbed-node-2] 2026-03-07 01:12:51.500511 | orchestrator | 2026-03-07 01:12:51.500515 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-07 01:12:51.500520 | orchestrator | testbed-node-0 : ok=27  changed=19  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-07 01:12:51.500525 | orchestrator | testbed-node-1 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-03-07 01:12:51.500534 | orchestrator | testbed-node-2 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-03-07 01:12:51.500538 | orchestrator | 2026-03-07 01:12:51.500542 | orchestrator | 2026-03-07 01:12:51.500546 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-07 01:12:51.500550 | orchestrator | Saturday 07 March 2026 01:12:50 +0000 (0:00:43.767) 0:03:22.759 ******** 2026-03-07 01:12:51.500553 | orchestrator | 
=============================================================================== 2026-03-07 01:12:51.500557 | orchestrator | glance : Restart glance-api container ---------------------------------- 43.77s 2026-03-07 01:12:51.500561 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 30.65s 2026-03-07 01:12:51.500565 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 8.54s 2026-03-07 01:12:51.500568 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 7.36s 2026-03-07 01:12:51.500572 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 6.73s 2026-03-07 01:12:51.500576 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 6.47s 2026-03-07 01:12:51.500580 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 6.38s 2026-03-07 01:12:51.500583 | orchestrator | glance : Copying over config.json files for services -------------------- 6.28s 2026-03-07 01:12:51.500587 | orchestrator | glance : Generating 'hostnqn' file for glance_api ----------------------- 5.82s 2026-03-07 01:12:51.500591 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 5.77s 2026-03-07 01:12:51.500595 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 5.56s 2026-03-07 01:12:51.500598 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 5.43s 2026-03-07 01:12:51.500602 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 5.37s 2026-03-07 01:12:51.500606 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS certificate --- 4.62s 2026-03-07 01:12:51.500610 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 4.32s 2026-03-07 01:12:51.500614 | orchestrator | glance : 
Ensuring glance service ceph config subdir exists -------------- 4.26s 2026-03-07 01:12:51.500617 | orchestrator | service-ks-register : glance | Creating users --------------------------- 4.08s 2026-03-07 01:12:51.500621 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 4.07s 2026-03-07 01:12:51.500625 | orchestrator | glance : Ensuring config directories exist ------------------------------ 3.92s 2026-03-07 01:12:51.500629 | orchestrator | service-ks-register : glance | Creating services ------------------------ 3.82s 2026-03-07 01:12:51.500633 | orchestrator | 2026-03-07 01:12:51 | INFO  | Task 7e99d8b7-c450-43f4-ae24-b3bace4763d0 is in state STARTED 2026-03-07 01:12:51.500637 | orchestrator | 2026-03-07 01:12:51 | INFO  | Task 60904f09-8872-4075-a69b-77fe3d92047d is in state STARTED 2026-03-07 01:12:51.500674 | orchestrator | 2026-03-07 01:12:51 | INFO  | Task 53104c7a-640d-4b51-aa00-177595f2a452 is in state STARTED 2026-03-07 01:12:51.500679 | orchestrator | 2026-03-07 01:12:51 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:12:54.539625 | orchestrator | 2026-03-07 01:12:54 | INFO  | Task 7e99d8b7-c450-43f4-ae24-b3bace4763d0 is in state STARTED 2026-03-07 01:12:54.541519 | orchestrator | 2026-03-07 01:12:54 | INFO  | Task 7d4209a7-820a-4086-b232-bf3100c4dbba is in state STARTED 2026-03-07 01:12:54.543222 | orchestrator | 2026-03-07 01:12:54 | INFO  | Task 60904f09-8872-4075-a69b-77fe3d92047d is in state STARTED 2026-03-07 01:12:54.545206 | orchestrator | 2026-03-07 01:12:54 | INFO  | Task 53104c7a-640d-4b51-aa00-177595f2a452 is in state STARTED 2026-03-07 01:12:54.545253 | orchestrator | 2026-03-07 01:12:54 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:12:57.604127 | orchestrator | 2026-03-07 01:12:57 | INFO  | Task 7e99d8b7-c450-43f4-ae24-b3bace4763d0 is in state STARTED 2026-03-07 01:12:57.605474 | orchestrator | 2026-03-07 01:12:57 | INFO  | Task 7d4209a7-820a-4086-b232-bf3100c4dbba is 
in state STARTED 2026-03-07 01:12:57.607217 | orchestrator | 2026-03-07 01:12:57 | INFO  | Task 60904f09-8872-4075-a69b-77fe3d92047d is in state STARTED 2026-03-07 01:12:57.609392 | orchestrator | 2026-03-07 01:12:57 | INFO  | Task 53104c7a-640d-4b51-aa00-177595f2a452 is in state STARTED 2026-03-07 01:12:57.609443 | orchestrator | 2026-03-07 01:12:57 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:13:00.645875 | orchestrator | 2026-03-07 01:13:00 | INFO  | Task 7e99d8b7-c450-43f4-ae24-b3bace4763d0 is in state STARTED 2026-03-07 01:13:00.646835 | orchestrator | 2026-03-07 01:13:00 | INFO  | Task 7d4209a7-820a-4086-b232-bf3100c4dbba is in state STARTED 2026-03-07 01:13:00.647961 | orchestrator | 2026-03-07 01:13:00 | INFO  | Task 60904f09-8872-4075-a69b-77fe3d92047d is in state STARTED 2026-03-07 01:13:00.649013 | orchestrator | 2026-03-07 01:13:00 | INFO  | Task 53104c7a-640d-4b51-aa00-177595f2a452 is in state STARTED 2026-03-07 01:13:00.649047 | orchestrator | 2026-03-07 01:13:00 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:13:03.688990 | orchestrator | 2026-03-07 01:13:03 | INFO  | Task 7e99d8b7-c450-43f4-ae24-b3bace4763d0 is in state STARTED 2026-03-07 01:13:03.690372 | orchestrator | 2026-03-07 01:13:03 | INFO  | Task 7d4209a7-820a-4086-b232-bf3100c4dbba is in state STARTED 2026-03-07 01:13:03.691310 | orchestrator | 2026-03-07 01:13:03 | INFO  | Task 60904f09-8872-4075-a69b-77fe3d92047d is in state STARTED 2026-03-07 01:13:03.692490 | orchestrator | 2026-03-07 01:13:03 | INFO  | Task 53104c7a-640d-4b51-aa00-177595f2a452 is in state STARTED 2026-03-07 01:13:03.692533 | orchestrator | 2026-03-07 01:13:03 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:13:06.727601 | orchestrator | 2026-03-07 01:13:06 | INFO  | Task 7e99d8b7-c450-43f4-ae24-b3bace4763d0 is in state SUCCESS 2026-03-07 01:13:06.728790 | orchestrator | 2026-03-07 01:13:06.728871 | orchestrator | 2026-03-07 01:13:06.728887 | orchestrator | PLAY [Group hosts 
based on configuration] ************************************** 2026-03-07 01:13:06.728898 | orchestrator | 2026-03-07 01:13:06.728907 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-07 01:13:06.728918 | orchestrator | Saturday 07 March 2026 01:09:36 +0000 (0:00:00.377) 0:00:00.377 ******** 2026-03-07 01:13:06.728927 | orchestrator | ok: [testbed-node-0] 2026-03-07 01:13:06.728939 | orchestrator | ok: [testbed-node-1] 2026-03-07 01:13:06.728947 | orchestrator | ok: [testbed-node-2] 2026-03-07 01:13:06.728953 | orchestrator | 2026-03-07 01:13:06.728958 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-07 01:13:06.728964 | orchestrator | Saturday 07 March 2026 01:09:36 +0000 (0:00:00.337) 0:00:00.715 ******** 2026-03-07 01:13:06.728970 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2026-03-07 01:13:06.728976 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2026-03-07 01:13:06.728981 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2026-03-07 01:13:06.728987 | orchestrator | 2026-03-07 01:13:06.728992 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2026-03-07 01:13:06.728998 | orchestrator | 2026-03-07 01:13:06.729003 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-07 01:13:06.729009 | orchestrator | Saturday 07 March 2026 01:09:37 +0000 (0:00:00.505) 0:00:01.220 ******** 2026-03-07 01:13:06.729014 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 01:13:06.729164 | orchestrator | 2026-03-07 01:13:06.729192 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2026-03-07 01:13:06.729198 | orchestrator | Saturday 07 March 2026 01:09:37 +0000 (0:00:00.620) 0:00:01.841 ******** 
2026-03-07 01:13:06.729204 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2026-03-07 01:13:06.729210 | orchestrator | 2026-03-07 01:13:06.729216 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2026-03-07 01:13:06.729223 | orchestrator | Saturday 07 March 2026 01:09:41 +0000 (0:00:03.628) 0:00:05.470 ******** 2026-03-07 01:13:06.729234 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2026-03-07 01:13:06.729246 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2026-03-07 01:13:06.729260 | orchestrator | 2026-03-07 01:13:06.729270 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2026-03-07 01:13:06.729347 | orchestrator | Saturday 07 March 2026 01:09:48 +0000 (0:00:06.787) 0:00:12.258 ******** 2026-03-07 01:13:06.729359 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-07 01:13:06.729370 | orchestrator | 2026-03-07 01:13:06.729381 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2026-03-07 01:13:06.729392 | orchestrator | Saturday 07 March 2026 01:09:51 +0000 (0:00:03.443) 0:00:15.701 ******** 2026-03-07 01:13:06.729403 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2026-03-07 01:13:06.729415 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-07 01:13:06.729425 | orchestrator | 2026-03-07 01:13:06.729436 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2026-03-07 01:13:06.729443 | orchestrator | Saturday 07 March 2026 01:09:55 +0000 (0:00:04.011) 0:00:19.713 ******** 2026-03-07 01:13:06.729450 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-07 01:13:06.729457 | orchestrator | 2026-03-07 01:13:06.729463 | 
orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2026-03-07 01:13:06.729470 | orchestrator | Saturday 07 March 2026 01:09:59 +0000 (0:00:04.152) 0:00:23.865 ******** 2026-03-07 01:13:06.729476 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2026-03-07 01:13:06.729496 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2026-03-07 01:13:06.729503 | orchestrator | 2026-03-07 01:13:06.729510 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2026-03-07 01:13:06.729516 | orchestrator | Saturday 07 March 2026 01:10:07 +0000 (0:00:07.870) 0:00:31.735 ******** 2026-03-07 01:13:06.729526 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-07 01:13:06.729554 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-07 01:13:06.729569 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-07 01:13:06.729576 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-07 01:13:06.729584 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-07 01:13:06.729595 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-07 01:13:06.729604 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-07 01:13:06.729617 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-07 01:13:06.729628 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-07 01:13:06.729635 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-07 01:13:06.729640 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-07 01:13:06.729650 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-07 01:13:06.729655 | orchestrator | 2026-03-07 01:13:06.729661 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-07 
01:13:06.729667 | orchestrator | Saturday 07 March 2026 01:10:09 +0000 (0:00:02.241) 0:00:33.977 ******** 2026-03-07 01:13:06.729672 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:13:06.729678 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:13:06.729685 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:13:06.729694 | orchestrator | 2026-03-07 01:13:06.729707 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-07 01:13:06.729718 | orchestrator | Saturday 07 March 2026 01:10:10 +0000 (0:00:00.283) 0:00:34.260 ******** 2026-03-07 01:13:06.729735 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 01:13:06.729745 | orchestrator | 2026-03-07 01:13:06.729755 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2026-03-07 01:13:06.729764 | orchestrator | Saturday 07 March 2026 01:10:10 +0000 (0:00:00.776) 0:00:35.037 ******** 2026-03-07 01:13:06.729780 | orchestrator | changed: [testbed-node-1] => (item=cinder-volume) 2026-03-07 01:13:06.729787 | orchestrator | changed: [testbed-node-0] => (item=cinder-volume) 2026-03-07 01:13:06.729793 | orchestrator | changed: [testbed-node-2] => (item=cinder-volume) 2026-03-07 01:13:06.729798 | orchestrator | changed: [testbed-node-1] => (item=cinder-backup) 2026-03-07 01:13:06.729804 | orchestrator | changed: [testbed-node-0] => (item=cinder-backup) 2026-03-07 01:13:06.729809 | orchestrator | changed: [testbed-node-2] => (item=cinder-backup) 2026-03-07 01:13:06.729815 | orchestrator | 2026-03-07 01:13:06.729820 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2026-03-07 01:13:06.729826 | orchestrator | Saturday 07 March 2026 01:10:13 +0000 (0:00:02.137) 0:00:37.174 ******** 2026-03-07 01:13:06.729833 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 
'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-07 01:13:06.729840 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-07 01:13:06.729852 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-07 01:13:06.729858 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-07 01:13:06.729874 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-07 01:13:06.729902 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-07 01:13:06.729909 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-07 01:13:06.729921 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 
'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-07 01:13:06.729927 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-07 01:13:06.729990 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-07 01:13:06.729999 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-07 01:13:06.730004 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-07 01:13:06.730010 | orchestrator | 2026-03-07 01:13:06.730067 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2026-03-07 01:13:06.730078 | orchestrator | Saturday 07 March 2026 01:10:18 +0000 (0:00:05.109) 0:00:42.283 ******** 2026-03-07 01:13:06.730089 | orchestrator | changed: 
[testbed-node-0] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-03-07 01:13:06.730100 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-03-07 01:13:06.730230 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-03-07 01:13:06.730241 | orchestrator | 2026-03-07 01:13:06.730249 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2026-03-07 01:13:06.730262 | orchestrator | Saturday 07 March 2026 01:10:20 +0000 (0:00:02.651) 0:00:44.935 ******** 2026-03-07 01:13:06.730273 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder.keyring) 2026-03-07 01:13:06.730282 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder.keyring) 2026-03-07 01:13:06.730292 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder.keyring) 2026-03-07 01:13:06.730302 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder-backup.keyring) 2026-03-07 01:13:06.730311 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder-backup.keyring) 2026-03-07 01:13:06.730338 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder-backup.keyring) 2026-03-07 01:13:06.730347 | orchestrator | 2026-03-07 01:13:06.730357 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2026-03-07 01:13:06.730366 | orchestrator | Saturday 07 March 2026 01:10:24 +0000 (0:00:03.702) 0:00:48.638 ******** 2026-03-07 01:13:06.730376 | orchestrator | ok: [testbed-node-0] => (item=cinder-volume) 2026-03-07 01:13:06.730388 | orchestrator | ok: [testbed-node-1] => (item=cinder-volume) 2026-03-07 01:13:06.730397 | orchestrator | ok: [testbed-node-0] => (item=cinder-backup) 2026-03-07 01:13:06.730409 | orchestrator | ok: [testbed-node-2] => (item=cinder-volume) 2026-03-07 01:13:06.730419 | orchestrator | ok: 
[testbed-node-1] => (item=cinder-backup) 2026-03-07 01:13:06.730427 | orchestrator | ok: [testbed-node-2] => (item=cinder-backup) 2026-03-07 01:13:06.730436 | orchestrator | 2026-03-07 01:13:06.730444 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2026-03-07 01:13:06.730453 | orchestrator | Saturday 07 March 2026 01:10:25 +0000 (0:00:01.337) 0:00:49.975 ******** 2026-03-07 01:13:06.730461 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:13:06.730470 | orchestrator | 2026-03-07 01:13:06.730479 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2026-03-07 01:13:06.730488 | orchestrator | Saturday 07 March 2026 01:10:26 +0000 (0:00:00.240) 0:00:50.215 ******** 2026-03-07 01:13:06.730497 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:13:06.730506 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:13:06.730515 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:13:06.730525 | orchestrator | 2026-03-07 01:13:06.730533 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-07 01:13:06.730542 | orchestrator | Saturday 07 March 2026 01:10:26 +0000 (0:00:00.685) 0:00:50.901 ******** 2026-03-07 01:13:06.730553 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 01:13:06.730561 | orchestrator | 2026-03-07 01:13:06.730572 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2026-03-07 01:13:06.730591 | orchestrator | Saturday 07 March 2026 01:10:27 +0000 (0:00:01.184) 0:00:52.085 ******** 2026-03-07 01:13:06.730603 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-07 01:13:06.730614 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-07 01:13:06.730645 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-07 01:13:06.730656 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-07 01:13:06.730671 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-07 01:13:06.730696 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-07 01:13:06.730706 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-07 01:13:06.730717 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-07 01:13:06.730736 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-07 01:13:06.730753 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-07 01:13:06.730764 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-07 01:13:06.730779 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-07 01:13:06.730788 | orchestrator | 2026-03-07 01:13:06.730796 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2026-03-07 01:13:06.730803 | orchestrator | Saturday 07 March 2026 01:10:33 +0000 (0:00:05.559) 0:00:57.645 ******** 2026-03-07 01:13:06.730812 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 
'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-07 01:13:06.730828 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-07 01:13:06.730842 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-07 01:13:06.730851 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-07 01:13:06.730860 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:13:06.730876 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-07 01:13:06.730936 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-07 01:13:06.730949 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-07 01:13:06.730966 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-07 01:13:06.730975 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:13:06.730991 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-07 01:13:06.731001 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-07 01:13:06.731017 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-07 01:13:06.731028 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-07 01:13:06.731044 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:13:06.731055 | orchestrator | 2026-03-07 01:13:06.731061 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2026-03-07 01:13:06.731067 | orchestrator | Saturday 07 March 2026 01:10:35 +0000 (0:00:01.659) 0:00:59.305 ******** 2026-03-07 01:13:06.731073 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-07 01:13:06.731083 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-07 01:13:06.731090 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-07 01:13:06.731128 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-07 01:13:06.731137 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:13:06.731144 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-07 01:13:06.731156 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-07 01:13:06.731163 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-07 01:13:06.731174 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-07 01:13:06.731180 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:13:06.731188 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-07 01:13:06.731199 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 
'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-07 01:13:06.731211 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-07 01:13:06.731218 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-07 01:13:06.731224 | 
orchestrator | skipping: [testbed-node-2] 2026-03-07 01:13:06.731231 | orchestrator | 2026-03-07 01:13:06.731239 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2026-03-07 01:13:06.731251 | orchestrator | Saturday 07 March 2026 01:10:37 +0000 (0:00:01.860) 0:01:01.165 ******** 2026-03-07 01:13:06.731271 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-07 01:13:06.731281 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 
'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-07 01:13:06.731298 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-07 01:13:06.731317 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-07 01:13:06.731328 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-07 01:13:06.731334 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-07 01:13:06.731343 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-07 01:13:06.731349 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 
'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-07 01:13:06.731359 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-07 01:13:06.731373 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-07 01:13:06.731380 | orchestrator 
| changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-07 01:13:06.731386 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-07 01:13:06.731393 | orchestrator | 2026-03-07 01:13:06.731402 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2026-03-07 01:13:06.731407 | orchestrator | Saturday 07 March 2026 01:10:41 +0000 (0:00:04.901) 0:01:06.067 ******** 2026-03-07 01:13:06.731413 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-03-07 01:13:06.731419 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-03-07 01:13:06.731426 | orchestrator | changed: 
[testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-03-07 01:13:06.731431 | orchestrator | 2026-03-07 01:13:06.731437 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2026-03-07 01:13:06.731443 | orchestrator | Saturday 07 March 2026 01:10:44 +0000 (0:00:02.667) 0:01:08.734 ******** 2026-03-07 01:13:06.731453 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-07 01:13:06.731466 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 
'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-07 01:13:06.731472 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-07 01:13:06.731479 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-07 01:13:06.731489 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-07 01:13:06.731496 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-07 01:13:06.731705 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-07 01:13:06.731736 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-07 01:13:06.731744 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-07 01:13:06.731749 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 
5672'], 'timeout': '30'}}}) 2026-03-07 01:13:06.731761 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-07 01:13:06.731768 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-07 01:13:06.731778 | orchestrator | 2026-03-07 01:13:06.731784 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2026-03-07 01:13:06.731790 | orchestrator | Saturday 07 March 2026 01:11:03 +0000 (0:00:18.828) 0:01:27.563 ******** 2026-03-07 01:13:06.731797 | orchestrator | changed: [testbed-node-1] 2026-03-07 01:13:06.731803 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:13:06.731809 | orchestrator | changed: [testbed-node-2] 2026-03-07 01:13:06.731814 | orchestrator 
| 2026-03-07 01:13:06.731820 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2026-03-07 01:13:06.731831 | orchestrator | Saturday 07 March 2026 01:11:06 +0000 (0:00:02.675) 0:01:30.238 ******** 2026-03-07 01:13:06.731837 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-07 01:13:06.731843 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-07 01:13:06.731850 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-07 01:13:06.731860 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-07 01:13:06.731866 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:13:06.731872 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-07 01:13:06.731888 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-07 01:13:06.731894 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-07 01:13:06.731900 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-07 01:13:06.731907 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:13:06.731913 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-07 01:13:06.731922 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-07 
01:13:06.731952 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-07 01:13:06.731962 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-07 01:13:06.731968 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:13:06.731974 | orchestrator | 2026-03-07 01:13:06.731980 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2026-03-07 01:13:06.731985 | orchestrator | Saturday 07 March 2026 01:11:07 +0000 (0:00:01.613) 0:01:31.852 ******** 2026-03-07 01:13:06.731991 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:13:06.731996 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:13:06.732002 | 
orchestrator | skipping: [testbed-node-2] 2026-03-07 01:13:06.732008 | orchestrator | 2026-03-07 01:13:06.732014 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2026-03-07 01:13:06.732020 | orchestrator | Saturday 07 March 2026 01:11:08 +0000 (0:00:00.645) 0:01:32.498 ******** 2026-03-07 01:13:06.732025 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-07 01:13:06.732036 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 
'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-07 01:13:06.732048 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-07 01:13:06.732059 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-07 01:13:06.732065 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-07 01:13:06.732071 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-07 01:13:06.732077 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-07 01:13:06.732087 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 
'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-07 01:13:06.732097 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-07 01:13:06.732134 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-07 01:13:06.732144 | orchestrator 
| changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-07 01:13:06.732152 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-07 01:13:06.732161 | orchestrator | 2026-03-07 01:13:06.732171 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-07 01:13:06.732180 | orchestrator | Saturday 07 March 2026 01:11:12 +0000 (0:00:04.020) 0:01:36.518 ******** 2026-03-07 01:13:06.732190 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:13:06.732198 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:13:06.732207 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:13:06.732216 | orchestrator | 2026-03-07 01:13:06.732222 | orchestrator | TASK [cinder : 
Creating Cinder database] *************************************** 2026-03-07 01:13:06.732228 | orchestrator | Saturday 07 March 2026 01:11:13 +0000 (0:00:00.896) 0:01:37.414 ******** 2026-03-07 01:13:06.732241 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:13:06.732247 | orchestrator | 2026-03-07 01:13:06.732252 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2026-03-07 01:13:06.732258 | orchestrator | Saturday 07 March 2026 01:11:15 +0000 (0:00:02.366) 0:01:39.781 ******** 2026-03-07 01:13:06.732263 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:13:06.732269 | orchestrator | 2026-03-07 01:13:06.732274 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2026-03-07 01:13:06.732280 | orchestrator | Saturday 07 March 2026 01:11:18 +0000 (0:00:02.365) 0:01:42.147 ******** 2026-03-07 01:13:06.732285 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:13:06.732290 | orchestrator | 2026-03-07 01:13:06.732296 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-03-07 01:13:06.732302 | orchestrator | Saturday 07 March 2026 01:11:38 +0000 (0:00:20.673) 0:02:02.821 ******** 2026-03-07 01:13:06.732309 | orchestrator | 2026-03-07 01:13:06.732315 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-03-07 01:13:06.732326 | orchestrator | Saturday 07 March 2026 01:11:38 +0000 (0:00:00.077) 0:02:02.899 ******** 2026-03-07 01:13:06.732332 | orchestrator | 2026-03-07 01:13:06.732339 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-03-07 01:13:06.732345 | orchestrator | Saturday 07 March 2026 01:11:38 +0000 (0:00:00.075) 0:02:02.974 ******** 2026-03-07 01:13:06.732351 | orchestrator | 2026-03-07 01:13:06.732358 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 
2026-03-07 01:13:06.732364 | orchestrator | Saturday 07 March 2026 01:11:38 +0000 (0:00:00.075) 0:02:03.049 ******** 2026-03-07 01:13:06.732370 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:13:06.732376 | orchestrator | changed: [testbed-node-2] 2026-03-07 01:13:06.732383 | orchestrator | changed: [testbed-node-1] 2026-03-07 01:13:06.732389 | orchestrator | 2026-03-07 01:13:06.732396 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2026-03-07 01:13:06.732402 | orchestrator | Saturday 07 March 2026 01:12:05 +0000 (0:00:26.533) 0:02:29.583 ******** 2026-03-07 01:13:06.732408 | orchestrator | changed: [testbed-node-1] 2026-03-07 01:13:06.732415 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:13:06.732422 | orchestrator | changed: [testbed-node-2] 2026-03-07 01:13:06.732428 | orchestrator | 2026-03-07 01:13:06.732435 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2026-03-07 01:13:06.732441 | orchestrator | Saturday 07 March 2026 01:12:18 +0000 (0:00:12.596) 0:02:42.179 ******** 2026-03-07 01:13:06.732448 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:13:06.732454 | orchestrator | changed: [testbed-node-1] 2026-03-07 01:13:06.732461 | orchestrator | changed: [testbed-node-2] 2026-03-07 01:13:06.732467 | orchestrator | 2026-03-07 01:13:06.732474 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2026-03-07 01:13:06.732480 | orchestrator | Saturday 07 March 2026 01:12:50 +0000 (0:00:32.487) 0:03:14.667 ******** 2026-03-07 01:13:06.732486 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:13:06.732493 | orchestrator | changed: [testbed-node-1] 2026-03-07 01:13:06.732499 | orchestrator | changed: [testbed-node-2] 2026-03-07 01:13:06.732505 | orchestrator | 2026-03-07 01:13:06.732512 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2026-03-07 
01:13:06.732524 | orchestrator | Saturday 07 March 2026 01:13:03 +0000 (0:00:12.634) 0:03:27.301 ******** 2026-03-07 01:13:06.732531 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:13:06.732538 | orchestrator | 2026-03-07 01:13:06.732544 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-07 01:13:06.732552 | orchestrator | testbed-node-0 : ok=30  changed=22  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-03-07 01:13:06.732559 | orchestrator | testbed-node-1 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-07 01:13:06.732604 | orchestrator | testbed-node-2 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-07 01:13:06.732611 | orchestrator | 2026-03-07 01:13:06.732617 | orchestrator | 2026-03-07 01:13:06.732625 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-07 01:13:06.732635 | orchestrator | Saturday 07 March 2026 01:13:03 +0000 (0:00:00.440) 0:03:27.742 ******** 2026-03-07 01:13:06.732644 | orchestrator | =============================================================================== 2026-03-07 01:13:06.732656 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 32.49s 2026-03-07 01:13:06.732667 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 26.53s 2026-03-07 01:13:06.732678 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 20.67s 2026-03-07 01:13:06.732687 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 18.83s 2026-03-07 01:13:06.732696 | orchestrator | cinder : Restart cinder-backup container ------------------------------- 12.63s 2026-03-07 01:13:06.732705 | orchestrator | cinder : Restart cinder-scheduler container ---------------------------- 12.60s 2026-03-07 01:13:06.732713 | orchestrator | 
service-ks-register : cinder | Granting user roles ---------------------- 7.87s 2026-03-07 01:13:06.732721 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 6.79s 2026-03-07 01:13:06.732730 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 5.56s 2026-03-07 01:13:06.732738 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 5.11s 2026-03-07 01:13:06.732747 | orchestrator | cinder : Copying over config.json files for services -------------------- 4.90s 2026-03-07 01:13:06.732755 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 4.15s 2026-03-07 01:13:06.732764 | orchestrator | cinder : Check cinder containers ---------------------------------------- 4.02s 2026-03-07 01:13:06.732773 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 4.01s 2026-03-07 01:13:06.732782 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 3.70s 2026-03-07 01:13:06.732791 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.63s 2026-03-07 01:13:06.732800 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.44s 2026-03-07 01:13:06.732808 | orchestrator | cinder : Generating 'hostnqn' file for cinder_volume -------------------- 2.68s 2026-03-07 01:13:06.732816 | orchestrator | cinder : Copying over cinder-wsgi.conf ---------------------------------- 2.67s 2026-03-07 01:13:06.732826 | orchestrator | cinder : Copy over Ceph keyring files for cinder-volume ----------------- 2.65s 2026-03-07 01:13:06.732834 | orchestrator | 2026-03-07 01:13:06 | INFO  | Task 7d4209a7-820a-4086-b232-bf3100c4dbba is in state STARTED 2026-03-07 01:13:06.732962 | orchestrator | 2026-03-07 01:13:06 | INFO  | Task 60904f09-8872-4075-a69b-77fe3d92047d is in state STARTED 2026-03-07 01:13:06.733778 | 
orchestrator | 2026-03-07 01:13:06 | INFO  | Task 53104c7a-640d-4b51-aa00-177595f2a452 is in state STARTED 2026-03-07 01:13:06.733812 | orchestrator | 2026-03-07 01:13:06 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:13:09.778237 | orchestrator | 2026-03-07 01:13:09 | INFO  | Task 7d4209a7-820a-4086-b232-bf3100c4dbba is in state STARTED 2026-03-07 01:13:09.780511 | orchestrator | 2026-03-07 01:13:09 | INFO  | Task 60904f09-8872-4075-a69b-77fe3d92047d is in state STARTED 2026-03-07 01:13:09.783273 | orchestrator | 2026-03-07 01:13:09 | INFO  | Task 53104c7a-640d-4b51-aa00-177595f2a452 is in state STARTED 2026-03-07 01:13:09.783488 | orchestrator | 2026-03-07 01:13:09 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:13:12.828301 | orchestrator | 2026-03-07 01:13:12 | INFO  | Task 7d4209a7-820a-4086-b232-bf3100c4dbba is in state STARTED 2026-03-07 01:13:12.831675 | orchestrator | 2026-03-07 01:13:12 | INFO  | Task 60904f09-8872-4075-a69b-77fe3d92047d is in state SUCCESS 2026-03-07 01:13:12.832849 | orchestrator | 2026-03-07 01:13:12.832891 | orchestrator | 2026-03-07 01:13:12.832897 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-07 01:13:12.832902 | orchestrator | 2026-03-07 01:13:12.832907 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-07 01:13:12.832911 | orchestrator | Saturday 07 March 2026 01:10:52 +0000 (0:00:00.791) 0:00:00.791 ******** 2026-03-07 01:13:12.832916 | orchestrator | ok: [testbed-node-0] 2026-03-07 01:13:12.832921 | orchestrator | ok: [testbed-node-1] 2026-03-07 01:13:12.832925 | orchestrator | ok: [testbed-node-2] 2026-03-07 01:13:12.832929 | orchestrator | 2026-03-07 01:13:12.832936 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-07 01:13:12.832942 | orchestrator | Saturday 07 March 2026 01:10:52 +0000 (0:00:00.541) 0:00:01.332 ******** 
2026-03-07 01:13:12.832948 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True) 2026-03-07 01:13:12.832958 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2026-03-07 01:13:12.832966 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2026-03-07 01:13:12.832971 | orchestrator | 2026-03-07 01:13:12.832978 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2026-03-07 01:13:12.832984 | orchestrator | 2026-03-07 01:13:12.832990 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-03-07 01:13:12.832997 | orchestrator | Saturday 07 March 2026 01:10:53 +0000 (0:00:00.627) 0:00:01.960 ******** 2026-03-07 01:13:12.833003 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 01:13:12.833011 | orchestrator | 2026-03-07 01:13:12.833017 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2026-03-07 01:13:12.833023 | orchestrator | Saturday 07 March 2026 01:10:54 +0000 (0:00:01.055) 0:00:03.015 ******** 2026-03-07 01:13:12.833069 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-07 01:13:12.833078 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 
'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-07 01:13:12.833096 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-07 01:13:12.833150 | orchestrator | 2026-03-07 01:13:12.833156 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2026-03-07 01:13:12.833159 | orchestrator | Saturday 07 March 2026 01:10:55 +0000 (0:00:01.478) 0:00:04.500 ******** 2026-03-07 01:13:12.833163 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2026-03-07 01:13:12.833169 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2026-03-07 01:13:12.833173 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-07 01:13:12.833179 | orchestrator | 2026-03-07 01:13:12.833185 | orchestrator | TASK [grafana : include_tasks] 
************************************************* 2026-03-07 01:13:12.833191 | orchestrator | Saturday 07 March 2026 01:10:57 +0000 (0:00:02.004) 0:00:06.504 ******** 2026-03-07 01:13:12.833197 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 01:13:12.833203 | orchestrator | 2026-03-07 01:13:12.833309 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2026-03-07 01:13:12.833317 | orchestrator | Saturday 07 March 2026 01:10:58 +0000 (0:00:01.136) 0:00:07.641 ******** 2026-03-07 01:13:12.833339 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-07 01:13:12.833346 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': 
'3000'}}}}) 2026-03-07 01:13:12.833354 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-07 01:13:12.833358 | orchestrator | 2026-03-07 01:13:12.833362 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2026-03-07 01:13:12.833365 | orchestrator | Saturday 07 March 2026 01:11:01 +0000 (0:00:02.761) 0:00:10.403 ******** 2026-03-07 01:13:12.833369 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-07 01:13:12.833378 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:13:12.833386 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-07 01:13:12.833390 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:13:12.833398 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-07 01:13:12.833402 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:13:12.833406 | orchestrator | 2026-03-07 01:13:12.833410 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2026-03-07 01:13:12.833413 | orchestrator | Saturday 07 March 2026 01:11:02 +0000 (0:00:01.098) 0:00:11.501 ******** 2026-03-07 01:13:12.833417 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-07 01:13:12.833421 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:13:12.833425 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-07 01:13:12.833429 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:13:12.833432 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-07 01:13:12.833440 | orchestrator | skipping: [testbed-node-2] 
2026-03-07 01:13:12.833444 | orchestrator | 2026-03-07 01:13:12.833447 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2026-03-07 01:13:12.833452 | orchestrator | Saturday 07 March 2026 01:11:04 +0000 (0:00:01.887) 0:00:13.388 ******** 2026-03-07 01:13:12.833459 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-07 01:13:12.833463 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-07 01:13:12.833472 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-07 01:13:12.833479 | orchestrator | 2026-03-07 01:13:12.833486 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2026-03-07 01:13:12.833492 | orchestrator | Saturday 07 March 2026 01:11:06 +0000 (0:00:02.158) 0:00:15.547 ******** 2026-03-07 01:13:12.833498 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-07 01:13:12.833505 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-07 01:13:12.833516 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-07 01:13:12.833522 | orchestrator | 2026-03-07 01:13:12.833529 | orchestrator | TASK [grafana : Copying over extra configuration file] ************************* 2026-03-07 01:13:12.833535 | orchestrator | Saturday 07 March 2026 01:11:09 +0000 (0:00:02.399) 0:00:17.947 ******** 2026-03-07 01:13:12.833541 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:13:12.833548 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:13:12.833554 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:13:12.833560 | orchestrator | 2026-03-07 01:13:12.833566 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] ************* 2026-03-07 01:13:12.833572 | orchestrator | Saturday 07 March 2026 01:11:09 +0000 (0:00:00.698) 0:00:18.646 ******** 2026-03-07 01:13:12.833581 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-03-07 01:13:12.833588 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-03-07 01:13:12.833594 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-03-07 
01:13:12.833599 | orchestrator | 2026-03-07 01:13:12.833603 | orchestrator | TASK [grafana : Configuring dashboards provisioning] *************************** 2026-03-07 01:13:12.833607 | orchestrator | Saturday 07 March 2026 01:11:11 +0000 (0:00:01.593) 0:00:20.240 ******** 2026-03-07 01:13:12.833611 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-03-07 01:13:12.833615 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-03-07 01:13:12.833619 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-03-07 01:13:12.833622 | orchestrator | 2026-03-07 01:13:12.833626 | orchestrator | TASK [grafana : Find custom grafana dashboards] ******************************** 2026-03-07 01:13:12.833630 | orchestrator | Saturday 07 March 2026 01:11:13 +0000 (0:00:01.455) 0:00:21.696 ******** 2026-03-07 01:13:12.833637 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-07 01:13:12.833641 | orchestrator | 2026-03-07 01:13:12.833644 | orchestrator | TASK [grafana : Find templated grafana dashboards] ***************************** 2026-03-07 01:13:12.833648 | orchestrator | Saturday 07 March 2026 01:11:14 +0000 (0:00:01.096) 0:00:22.793 ******** 2026-03-07 01:13:12.833652 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access 2026-03-07 01:13:12.833656 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory 2026-03-07 01:13:12.833659 | orchestrator | ok: [testbed-node-0] 2026-03-07 01:13:12.833663 | orchestrator | ok: [testbed-node-2] 2026-03-07 01:13:12.833667 | orchestrator | ok: [testbed-node-1] 2026-03-07 01:13:12.833671 | orchestrator | 2026-03-07 01:13:12.833674 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] **************************** 2026-03-07 
01:13:12.833678 | orchestrator | Saturday 07 March 2026 01:11:15 +0000 (0:00:00.866) 0:00:23.659 ******** 2026-03-07 01:13:12.833682 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:13:12.833686 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:13:12.833689 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:13:12.833693 | orchestrator | 2026-03-07 01:13:12.833697 | orchestrator | TASK [grafana : Copying over custom dashboards] ******************************** 2026-03-07 01:13:12.833710 | orchestrator | Saturday 07 March 2026 01:11:16 +0000 (0:00:01.058) 0:00:24.718 ******** 2026-03-07 01:13:12.833715 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 121701, 'inode': 1094097, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.5611649, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-07 01:13:12.833720 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 121701, 'inode': 1094097, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.5611649, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-07 
01:13:12.833724 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 121701, 'inode': 1094097, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.5611649, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-07 01:13:12.833732 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfsdashboard.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfsdashboard.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 143913, 'inode': 1094132, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.5695016, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-07 01:13:12.833742 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfsdashboard.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfsdashboard.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 143913, 'inode': 1094132, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.5695016, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': 
False, 'isuid': False, 'isgid': False}}) 2026-03-07 01:13:12.833748 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfsdashboard.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfsdashboard.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 143913, 'inode': 1094132, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.5695016, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-07 01:13:12.833763 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26019, 'inode': 1094204, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.5871181, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-07 01:13:12.833772 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26019, 'inode': 1094204, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.5871181, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-07 01:13:12.833777 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26019, 'inode': 1094204, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.5871181, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-07 01:13:12.833787 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1094125, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.5662637, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-07 01:13:12.833793 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1094125, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.5662637, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-07 01:13:12.833805 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1094125, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.5662637, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-07 01:13:12.833816 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 170293, 'inode': 1094205, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.590919, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-07 01:13:12.833823 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 170293, 'inode': 1094205, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.590919, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 
'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-07 01:13:12.833829 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 170293, 'inode': 1094205, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.590919, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-07 01:13:12.833835 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-nvmeof-performance.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof-performance.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 33297, 'inode': 1094106, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.5627308, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-07 01:13:12.833846 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-nvmeof-performance.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof-performance.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 33297, 'inode': 1094106, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.5627308, 'gr_name': 'root', 
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-07 01:13:12.833906 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-nvmeof-performance.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof-performance.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 33297, 'inode': 1094106, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.5627308, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-07 01:13:12.833925 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26346, 'inode': 1094177, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.5764415, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-07 01:13:12.833930 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26346, 'inode': 1094177, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 
1772841745.0, 'ctime': 1772842602.5764415, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
[... ~60 similar loop results condensed: 2026-03-07 01:13:12 | orchestrator | changed: [testbed-node-0], [testbed-node-1], [testbed-node-2] for each Grafana dashboard under /operations/grafana/dashboards/ — ceph/osd-device-details.json, ceph/radosgw-overview.json, ceph/README.md, ceph/ceph-cluster.json, ceph/cephfs-overview.json, ceph/pool-detail.json, ceph/rbd-details.json, ceph/ceph_overview.json, ceph/radosgw-detail.json, ceph/smb-overview.json, ceph/osds-overview.json, ceph/multi-cluster-overview.json, ceph/hosts-overview.json, ceph/pool-overview.json, ceph/host-details.json, ceph/radosgw-sync-overview.json, ceph/ceph-nvmeof.json, openstack/openstack.json, infrastructure/haproxy.json, infrastructure/database.json, infrastructure/node-rsrc-use.json — each item reported as a regular file owned by root:root, mode 0644 ...]
2026-03-07 01:13:12.834652 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645,
'inode': 1094214, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.5937605, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-07 01:13:12.834657 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1094214, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.5937605, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-07 01:13:12.834661 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1094214, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.5937605, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-07 01:13:12.834665 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': 
False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1094341, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.6216035, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-07 01:13:12.834675 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1094341, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.6216035, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-07 01:13:12.834679 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1094341, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.6216035, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-07 01:13:12.834685 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1094291, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.6191201, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-07 01:13:12.834689 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1094291, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.6191201, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-07 01:13:12.834693 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1094291, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.6191201, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-07 01:13:12.834697 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22303, 'inode': 1094345, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.6230788, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-07 01:13:12.834706 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22303, 'inode': 1094345, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.6230788, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-07 01:13:12.834713 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22303, 'inode': 1094345, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.6230788, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': 
False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-07 01:13:12.834722 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1094380, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.6307468, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-07 01:13:12.834726 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1094380, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.6307468, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-07 01:13:12.834730 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1094380, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.6307468, 'gr_name': 'root', 'pw_name': 'root', 
'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-07 01:13:12.834734 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21194, 'inode': 1094337, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.6211352, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-07 01:13:12.834741 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21194, 'inode': 1094337, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.6211352, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-07 01:13:12.834747 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21194, 'inode': 1094337, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.6211352, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-07 01:13:12.834753 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1094282, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.6102357, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-07 01:13:12.834758 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1094282, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.6102357, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-07 01:13:12.834761 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1094282, 'dev': 122, 'nlink': 1, 'atime': 
1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.6102357, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-07 01:13:12.834765 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1094249, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.6035748, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-07 01:13:12.834773 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1094249, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.6035748, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-07 01:13:12.834780 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 
'inode': 1094249, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.6035748, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-07 01:13:12.834784 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1094281, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.609576, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-07 01:13:12.834790 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1094281, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.609576, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-07 01:13:12.834794 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1094281, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.609576, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-07 01:13:12.834798 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1094239, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.6026611, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-07 01:13:12.834804 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1094239, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.6026611, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-07 01:13:12.834811 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': 
False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1094239, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.6026611, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-07 01:13:12.834815 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15957, 'inode': 1094284, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.6107948, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-07 01:13:12.834821 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15957, 'inode': 1094284, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.6107948, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-07 01:13:12.834825 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15957, 'inode': 1094284, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.6107948, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-07 01:13:12.834829 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1094367, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.6294863, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-07 01:13:12.834836 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1094367, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.6294863, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 
2026-03-07 01:13:12.834840 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1094367, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.6294863, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-07 01:13:12.834846 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1094355, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.626824, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-07 01:13:12.834852 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1094355, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.626824, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': 
False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-07 01:13:12.834857 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1094355, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.626824, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-07 01:13:12.834860 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1094217, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.595279, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-07 01:13:12.834867 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1094217, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.595279, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-07 01:13:12.834871 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1094217, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.595279, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-07 01:13:12.834878 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1094224, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.5964417, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-07 01:13:12.834967 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1094224, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 
'mtime': 1772841745.0, 'ctime': 1772842602.5964417, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-07 01:13:12.834974 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1094224, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.5964417, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-07 01:13:12.834979 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1094326, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.620616, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-07 01:13:12.834989 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': 
False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1094326, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.620616, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-07 01:13:12.834993 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1094326, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.620616, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-07 01:13:12.835000 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21951, 'inode': 1094350, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.6247897, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-07 01:13:12.835007 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': 
'0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21951, 'inode': 1094350, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.6247897, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-07 01:13:12.835012 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21951, 'inode': 1094350, 'dev': 122, 'nlink': 1, 'atime': 1772841745.0, 'mtime': 1772841745.0, 'ctime': 1772842602.6247897, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-07 01:13:12.835016 | orchestrator | 2026-03-07 01:13:12.835021 | orchestrator | TASK [grafana : Check grafana containers] ************************************** 2026-03-07 01:13:12.835026 | orchestrator | Saturday 07 March 2026 01:11:57 +0000 (0:00:41.756) 0:01:06.475 ******** 2026-03-07 01:13:12.835035 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 
'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-07 01:13:12.835040 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-07 01:13:12.835046 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-07 01:13:12.835053 | orchestrator | 2026-03-07 01:13:12.835059 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2026-03-07 01:13:12.835066 | orchestrator | Saturday 07 March 2026 01:11:58 +0000 (0:00:01.093) 0:01:07.569 ******** 2026-03-07 01:13:12.835076 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:13:12.835081 | orchestrator | 2026-03-07 01:13:12.835087 | 
orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ********
2026-03-07 01:13:12.835093 | orchestrator | Saturday 07 March 2026 01:12:01 +0000 (0:00:02.540) 0:01:10.109 ********
2026-03-07 01:13:12.835100 | orchestrator | changed: [testbed-node-0]
2026-03-07 01:13:12.835160 | orchestrator |
2026-03-07 01:13:12.835167 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2026-03-07 01:13:12.835173 | orchestrator | Saturday 07 March 2026 01:12:03 +0000 (0:00:02.326) 0:01:12.435 ********
2026-03-07 01:13:12.835179 | orchestrator |
2026-03-07 01:13:12.835184 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2026-03-07 01:13:12.835190 | orchestrator | Saturday 07 March 2026 01:12:03 +0000 (0:00:00.094) 0:01:12.529 ********
2026-03-07 01:13:12.835195 | orchestrator |
2026-03-07 01:13:12.835201 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2026-03-07 01:13:12.835206 | orchestrator | Saturday 07 March 2026 01:12:04 +0000 (0:00:00.297) 0:01:12.827 ********
2026-03-07 01:13:12.835212 | orchestrator |
2026-03-07 01:13:12.835218 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ********************
2026-03-07 01:13:12.835224 | orchestrator | Saturday 07 March 2026 01:12:04 +0000 (0:00:00.083) 0:01:12.910 ********
2026-03-07 01:13:12.835230 | orchestrator | skipping: [testbed-node-1]
2026-03-07 01:13:12.835236 | orchestrator | skipping: [testbed-node-2]
2026-03-07 01:13:12.835246 | orchestrator | changed: [testbed-node-0]
2026-03-07 01:13:12.835253 | orchestrator |
2026-03-07 01:13:12.835259 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] *********
2026-03-07 01:13:12.835274 | orchestrator | Saturday 07 March 2026 01:12:06 +0000 (0:00:01.863) 0:01:14.773 ********
2026-03-07 01:13:12.835280 | orchestrator | skipping: [testbed-node-1]
2026-03-07 01:13:12.835286 | orchestrator | skipping: [testbed-node-2]
2026-03-07 01:13:12.835292 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left).
2026-03-07 01:13:12.835299 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left).
2026-03-07 01:13:12.835304 | orchestrator | ok: [testbed-node-0]
2026-03-07 01:13:12.835311 | orchestrator |
2026-03-07 01:13:12.835316 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] ***************
2026-03-07 01:13:12.835322 | orchestrator | Saturday 07 March 2026 01:12:33 +0000 (0:00:27.383) 0:01:42.156 ********
2026-03-07 01:13:12.835328 | orchestrator | skipping: [testbed-node-0]
2026-03-07 01:13:12.835334 | orchestrator | changed: [testbed-node-2]
2026-03-07 01:13:12.835340 | orchestrator | changed: [testbed-node-1]
2026-03-07 01:13:12.835346 | orchestrator |
2026-03-07 01:13:12.835353 | orchestrator | TASK [grafana : Wait for grafana application ready] ****************************
2026-03-07 01:13:12.835358 | orchestrator | Saturday 07 March 2026 01:13:03 +0000 (0:00:30.062) 0:02:12.219 ********
2026-03-07 01:13:12.835364 | orchestrator | ok: [testbed-node-0]
2026-03-07 01:13:12.835370 | orchestrator |
2026-03-07 01:13:12.835376 | orchestrator | TASK [grafana : Remove old grafana docker volume] ******************************
2026-03-07 01:13:12.835382 | orchestrator | Saturday 07 March 2026 01:13:06 +0000 (0:00:02.471) 0:02:14.691 ********
2026-03-07 01:13:12.835388 | orchestrator | skipping: [testbed-node-0]
2026-03-07 01:13:12.835394 | orchestrator | skipping: [testbed-node-1]
2026-03-07 01:13:12.835399 | orchestrator | skipping: [testbed-node-2]
2026-03-07 01:13:12.835405 | orchestrator |
2026-03-07 01:13:12.835411 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************
2026-03-07 01:13:12.835417 | orchestrator | Saturday 07 March
2026 01:13:06 +0000 (0:00:00.599) 0:02:15.291 ********
2026-03-07 01:13:12.835425 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})
2026-03-07 01:13:12.835434 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}})
2026-03-07 01:13:12.835441 | orchestrator |
2026-03-07 01:13:12.835447 | orchestrator | TASK [grafana : Disable Getting Started panel] *********************************
2026-03-07 01:13:12.835454 | orchestrator | Saturday 07 March 2026 01:13:09 +0000 (0:00:02.729) 0:02:18.021 ********
2026-03-07 01:13:12.835458 | orchestrator | skipping: [testbed-node-0]
2026-03-07 01:13:12.835462 | orchestrator |
2026-03-07 01:13:12.835466 | orchestrator | PLAY RECAP *********************************************************************
2026-03-07 01:13:12.835470 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-07 01:13:12.835475 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-07 01:13:12.835478 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-07 01:13:12.835482 | orchestrator |
2026-03-07 01:13:12.835486 | orchestrator |
2026-03-07 01:13:12.835490 | orchestrator | TASKS RECAP ********************************************************************
2026-03-07 01:13:12.835493 | orchestrator | Saturday 07 March 2026 01:13:09 +0000 (0:00:00.290) 0:02:18.311 ********
2026-03-07 01:13:12.835502 | orchestrator | ===============================================================================
2026-03-07 01:13:12.835510 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 41.76s
2026-03-07 01:13:12.835514 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 30.06s
2026-03-07 01:13:12.835518 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 27.38s
2026-03-07 01:13:12.835522 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 2.76s
2026-03-07 01:13:12.835525 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.73s
2026-03-07 01:13:12.835529 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.54s
2026-03-07 01:13:12.835533 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.47s
2026-03-07 01:13:12.835536 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 2.40s
2026-03-07 01:13:12.835540 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.33s
2026-03-07 01:13:12.835543 | orchestrator | grafana : Copying over config.json files -------------------------------- 2.16s
2026-03-07 01:13:12.835547 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 2.01s
2026-03-07 01:13:12.835551 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 1.89s
2026-03-07 01:13:12.835559 | orchestrator | grafana : Restart first grafana container ------------------------------- 1.86s
2026-03-07 01:13:12.835563 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.59s
2026-03-07 01:13:12.835566 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 1.48s
2026-03-07 01:13:12.835570 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.46s
2026-03-07 01:13:12.835574 | orchestrator | grafana : include_tasks ------------------------------------------------- 1.14s
2026-03-07 01:13:12.835577 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS certificate --- 1.10s
2026-03-07 01:13:12.835581 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 1.10s
2026-03-07 01:13:12.835585 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.09s
2026-03-07 01:13:12.835589 | orchestrator | 2026-03-07 01:13:12 | INFO  | Task 53104c7a-640d-4b51-aa00-177595f2a452 is in state STARTED
2026-03-07 01:13:12.835593 | orchestrator | 2026-03-07 01:13:12 | INFO  | Wait 1 second(s) until the next check
2026-03-07 01:13:15.873477 | orchestrator | 2026-03-07 01:13:15 | INFO  | Task 7d4209a7-820a-4086-b232-bf3100c4dbba is in state STARTED
2026-03-07 01:13:15.874897 | orchestrator | 2026-03-07 01:13:15 | INFO  | Task 53104c7a-640d-4b51-aa00-177595f2a452 is in state STARTED
2026-03-07 01:13:15.875059 | orchestrator | 2026-03-07 01:13:15 | INFO  | Wait 1 second(s) until the next check
2026-03-07 01:13:18.912069 | orchestrator | 2026-03-07 01:13:18 | INFO  | Task 7d4209a7-820a-4086-b232-bf3100c4dbba is in state STARTED
2026-03-07 01:13:18.913249 | orchestrator | 2026-03-07 01:13:18 | INFO  | Task 53104c7a-640d-4b51-aa00-177595f2a452 is in state STARTED
2026-03-07 01:13:18.913283 | orchestrator | 2026-03-07 01:13:18 | INFO  | Wait 1 second(s) until the next check
2026-03-07 01:13:21.952757 | orchestrator | 2026-03-07 01:13:21 | INFO  | Task 7d4209a7-820a-4086-b232-bf3100c4dbba is in state STARTED
2026-03-07 01:13:21.954920 | orchestrator | 2026-03-07 01:13:21 | INFO  | Task 53104c7a-640d-4b51-aa00-177595f2a452 is in state STARTED
2026-03-07 01:13:21.955005 | orchestrator |
2026-03-07 01:13:21 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:14:25.875876 | orchestrator | 2026-03-07 01:14:25 | INFO  | Task 7d4209a7-820a-4086-b232-bf3100c4dbba is in state STARTED 2026-03-07 01:14:25.877993 | orchestrator | 2026-03-07 01:14:25 | INFO  | Task 
53104c7a-640d-4b51-aa00-177595f2a452 is in state STARTED 2026-03-07 01:14:25.878104 | orchestrator | 2026-03-07 01:14:25 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:14:28.926129 | orchestrator | 2026-03-07 01:14:28 | INFO  | Task 7d4209a7-820a-4086-b232-bf3100c4dbba is in state STARTED 2026-03-07 01:14:28.927460 | orchestrator | 2026-03-07 01:14:28 | INFO  | Task 53104c7a-640d-4b51-aa00-177595f2a452 is in state STARTED 2026-03-07 01:14:28.927557 | orchestrator | 2026-03-07 01:14:28 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:14:31.968884 | orchestrator | 2026-03-07 01:14:31 | INFO  | Task 7d4209a7-820a-4086-b232-bf3100c4dbba is in state STARTED 2026-03-07 01:14:31.970330 | orchestrator | 2026-03-07 01:14:31 | INFO  | Task 53104c7a-640d-4b51-aa00-177595f2a452 is in state STARTED 2026-03-07 01:14:31.970366 | orchestrator | 2026-03-07 01:14:31 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:14:35.007263 | orchestrator | 2026-03-07 01:14:35 | INFO  | Task dcccee1c-f381-4b56-bc80-79ff84f652d8 is in state STARTED 2026-03-07 01:14:35.007366 | orchestrator | 2026-03-07 01:14:35 | INFO  | Task 7d4209a7-820a-4086-b232-bf3100c4dbba is in state SUCCESS 2026-03-07 01:14:35.008134 | orchestrator | 2026-03-07 01:14:35 | INFO  | Task 53104c7a-640d-4b51-aa00-177595f2a452 is in state STARTED 2026-03-07 01:14:35.008203 | orchestrator | 2026-03-07 01:14:35 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:14:38.051188 | orchestrator | 2026-03-07 01:14:38 | INFO  | Task dcccee1c-f381-4b56-bc80-79ff84f652d8 is in state STARTED 2026-03-07 01:14:38.054432 | orchestrator | 2026-03-07 01:14:38 | INFO  | Task 53104c7a-640d-4b51-aa00-177595f2a452 is in state STARTED 2026-03-07 01:14:38.054518 | orchestrator | 2026-03-07 01:14:38 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:14:41.101353 | orchestrator | 2026-03-07 01:14:41 | INFO  | Task dcccee1c-f381-4b56-bc80-79ff84f652d8 is in state STARTED 2026-03-07 
01:14:41.104314 | orchestrator | 2026-03-07 01:14:41 | INFO  | Task 53104c7a-640d-4b51-aa00-177595f2a452 is in state STARTED 2026-03-07 01:14:41.104396 | orchestrator | 2026-03-07 01:14:41 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:15:45.012448 | orchestrator | 2026-03-07 
01:15:45 | INFO  | Task dcccee1c-f381-4b56-bc80-79ff84f652d8 is in state STARTED 2026-03-07 01:15:45.013285 | orchestrator | 2026-03-07 01:15:45 | INFO  | Task 53104c7a-640d-4b51-aa00-177595f2a452 is in state STARTED 2026-03-07 01:15:45.013330 | orchestrator | 2026-03-07 01:15:45 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:15:48.083888 | orchestrator | 2026-03-07 01:15:48 | INFO  | Task dcccee1c-f381-4b56-bc80-79ff84f652d8 is in state STARTED 2026-03-07 01:15:48.084965 | orchestrator | 2026-03-07 01:15:48 | INFO  | Task 53104c7a-640d-4b51-aa00-177595f2a452 is in state STARTED 2026-03-07 01:15:48.085011 | orchestrator | 2026-03-07 01:15:48 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:15:51.117852 | orchestrator | 2026-03-07 01:15:51 | INFO  | Task dcccee1c-f381-4b56-bc80-79ff84f652d8 is in state STARTED 2026-03-07 01:15:51.118803 | orchestrator | 2026-03-07 01:15:51 | INFO  | Task 53104c7a-640d-4b51-aa00-177595f2a452 is in state STARTED 2026-03-07 01:15:51.119739 | orchestrator | 2026-03-07 01:15:51 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:15:54.169538 | orchestrator | 2026-03-07 01:15:54 | INFO  | Task dcccee1c-f381-4b56-bc80-79ff84f652d8 is in state STARTED 2026-03-07 01:15:54.172791 | orchestrator | 2026-03-07 01:15:54 | INFO  | Task 53104c7a-640d-4b51-aa00-177595f2a452 is in state STARTED 2026-03-07 01:15:54.172876 | orchestrator | 2026-03-07 01:15:54 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:15:57.226326 | orchestrator | 2026-03-07 01:15:57 | INFO  | Task dcccee1c-f381-4b56-bc80-79ff84f652d8 is in state STARTED 2026-03-07 01:15:57.229787 | orchestrator | 2026-03-07 01:15:57 | INFO  | Task 53104c7a-640d-4b51-aa00-177595f2a452 is in state STARTED 2026-03-07 01:15:57.229964 | orchestrator | 2026-03-07 01:15:57 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:16:00.274629 | orchestrator | 2026-03-07 01:16:00 | INFO  | Task dcccee1c-f381-4b56-bc80-79ff84f652d8 is in state 
STARTED 2026-03-07 01:16:00.275109 | orchestrator | 2026-03-07 01:16:00 | INFO  | Task 53104c7a-640d-4b51-aa00-177595f2a452 is in state STARTED 2026-03-07 01:16:00.275161 | orchestrator | 2026-03-07 01:16:00 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:16:03.314939 | orchestrator | 2026-03-07 01:16:03 | INFO  | Task dcccee1c-f381-4b56-bc80-79ff84f652d8 is in state STARTED 2026-03-07 01:16:03.316103 | orchestrator | 2026-03-07 01:16:03 | INFO  | Task 53104c7a-640d-4b51-aa00-177595f2a452 is in state STARTED 2026-03-07 01:16:03.316157 | orchestrator | 2026-03-07 01:16:03 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:16:06.354127 | orchestrator | 2026-03-07 01:16:06 | INFO  | Task dcccee1c-f381-4b56-bc80-79ff84f652d8 is in state STARTED 2026-03-07 01:16:06.355187 | orchestrator | 2026-03-07 01:16:06 | INFO  | Task 53104c7a-640d-4b51-aa00-177595f2a452 is in state STARTED 2026-03-07 01:16:06.355224 | orchestrator | 2026-03-07 01:16:06 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:16:09.390937 | orchestrator | 2026-03-07 01:16:09 | INFO  | Task dcccee1c-f381-4b56-bc80-79ff84f652d8 is in state STARTED 2026-03-07 01:16:09.391084 | orchestrator | 2026-03-07 01:16:09 | INFO  | Task 53104c7a-640d-4b51-aa00-177595f2a452 is in state STARTED 2026-03-07 01:16:09.391100 | orchestrator | 2026-03-07 01:16:09 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:16:12.446177 | orchestrator | 2026-03-07 01:16:12 | INFO  | Task dcccee1c-f381-4b56-bc80-79ff84f652d8 is in state STARTED 2026-03-07 01:16:12.447052 | orchestrator | 2026-03-07 01:16:12 | INFO  | Task 53104c7a-640d-4b51-aa00-177595f2a452 is in state STARTED 2026-03-07 01:16:12.447113 | orchestrator | 2026-03-07 01:16:12 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:16:15.496258 | orchestrator | 2026-03-07 01:16:15 | INFO  | Task dcccee1c-f381-4b56-bc80-79ff84f652d8 is in state STARTED 2026-03-07 01:16:15.497104 | orchestrator | 2026-03-07 01:16:15 | INFO  
| Task 53104c7a-640d-4b51-aa00-177595f2a452 is in state STARTED 2026-03-07 01:16:15.497133 | orchestrator | 2026-03-07 01:16:15 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:16:18.547089 | orchestrator | 2026-03-07 01:16:18 | INFO  | Task dcccee1c-f381-4b56-bc80-79ff84f652d8 is in state STARTED 2026-03-07 01:16:18.549134 | orchestrator | 2026-03-07 01:16:18 | INFO  | Task 53104c7a-640d-4b51-aa00-177595f2a452 is in state STARTED 2026-03-07 01:16:18.549210 | orchestrator | 2026-03-07 01:16:18 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:16:21.583811 | orchestrator | 2026-03-07 01:16:21 | INFO  | Task dcccee1c-f381-4b56-bc80-79ff84f652d8 is in state STARTED 2026-03-07 01:16:21.584771 | orchestrator | 2026-03-07 01:16:21 | INFO  | Task 53104c7a-640d-4b51-aa00-177595f2a452 is in state STARTED 2026-03-07 01:16:21.584829 | orchestrator | 2026-03-07 01:16:21 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:16:24.647046 | orchestrator | 2026-03-07 01:16:24 | INFO  | Task dcccee1c-f381-4b56-bc80-79ff84f652d8 is in state STARTED 2026-03-07 01:16:24.647162 | orchestrator | 2026-03-07 01:16:24 | INFO  | Task 53104c7a-640d-4b51-aa00-177595f2a452 is in state STARTED 2026-03-07 01:16:24.647182 | orchestrator | 2026-03-07 01:16:24 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:16:27.693097 | orchestrator | 2026-03-07 01:16:27 | INFO  | Task dcccee1c-f381-4b56-bc80-79ff84f652d8 is in state STARTED 2026-03-07 01:16:27.694965 | orchestrator | 2026-03-07 01:16:27 | INFO  | Task 53104c7a-640d-4b51-aa00-177595f2a452 is in state STARTED 2026-03-07 01:16:27.695013 | orchestrator | 2026-03-07 01:16:27 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:16:30.741318 | orchestrator | 2026-03-07 01:16:30 | INFO  | Task dcccee1c-f381-4b56-bc80-79ff84f652d8 is in state STARTED 2026-03-07 01:16:30.744104 | orchestrator | 2026-03-07 01:16:30 | INFO  | Task 53104c7a-640d-4b51-aa00-177595f2a452 is in state STARTED 2026-03-07 
01:16:30.744259 | orchestrator | 2026-03-07 01:16:30 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:16:33.793068 | orchestrator | 2026-03-07 01:16:33 | INFO  | Task dcccee1c-f381-4b56-bc80-79ff84f652d8 is in state STARTED 2026-03-07 01:16:33.795099 | orchestrator | 2026-03-07 01:16:33 | INFO  | Task 53104c7a-640d-4b51-aa00-177595f2a452 is in state STARTED 2026-03-07 01:16:33.795159 | orchestrator | 2026-03-07 01:16:33 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:16:36.842928 | orchestrator | 2026-03-07 01:16:36 | INFO  | Task dcccee1c-f381-4b56-bc80-79ff84f652d8 is in state STARTED 2026-03-07 01:16:36.844740 | orchestrator | 2026-03-07 01:16:36 | INFO  | Task 53104c7a-640d-4b51-aa00-177595f2a452 is in state STARTED 2026-03-07 01:16:36.844885 | orchestrator | 2026-03-07 01:16:36 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:16:39.903312 | orchestrator | 2026-03-07 01:16:39 | INFO  | Task dcccee1c-f381-4b56-bc80-79ff84f652d8 is in state STARTED 2026-03-07 01:16:39.904798 | orchestrator | 2026-03-07 01:16:39 | INFO  | Task 53104c7a-640d-4b51-aa00-177595f2a452 is in state STARTED 2026-03-07 01:16:39.904842 | orchestrator | 2026-03-07 01:16:39 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:16:42.959491 | orchestrator | 2026-03-07 01:16:42 | INFO  | Task dcccee1c-f381-4b56-bc80-79ff84f652d8 is in state STARTED 2026-03-07 01:16:42.960541 | orchestrator | 2026-03-07 01:16:42 | INFO  | Task 53104c7a-640d-4b51-aa00-177595f2a452 is in state STARTED 2026-03-07 01:16:42.960594 | orchestrator | 2026-03-07 01:16:42 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:16:46.003450 | orchestrator | 2026-03-07 01:16:45 | INFO  | Task dcccee1c-f381-4b56-bc80-79ff84f652d8 is in state STARTED 2026-03-07 01:16:46.006698 | orchestrator | 2026-03-07 01:16:46 | INFO  | Task 53104c7a-640d-4b51-aa00-177595f2a452 is in state STARTED 2026-03-07 01:16:46.007319 | orchestrator | 2026-03-07 01:16:46 | INFO  | Wait 1 second(s) 
until the next check 2026-03-07 01:16:49.076959 | orchestrator | 2026-03-07 01:16:49 | INFO  | Task dcccee1c-f381-4b56-bc80-79ff84f652d8 is in state STARTED 2026-03-07 01:16:49.079999 | orchestrator | 2026-03-07 01:16:49 | INFO  | Task 53104c7a-640d-4b51-aa00-177595f2a452 is in state STARTED 2026-03-07 01:16:49.080070 | orchestrator | 2026-03-07 01:16:49 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:16:52.120835 | orchestrator | 2026-03-07 01:16:52 | INFO  | Task dcccee1c-f381-4b56-bc80-79ff84f652d8 is in state STARTED 2026-03-07 01:16:52.122510 | orchestrator | 2026-03-07 01:16:52 | INFO  | Task 53104c7a-640d-4b51-aa00-177595f2a452 is in state STARTED 2026-03-07 01:16:52.122546 | orchestrator | 2026-03-07 01:16:52 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:16:55.162781 | orchestrator | 2026-03-07 01:16:55 | INFO  | Task dcccee1c-f381-4b56-bc80-79ff84f652d8 is in state STARTED 2026-03-07 01:16:55.164743 | orchestrator | 2026-03-07 01:16:55 | INFO  | Task 53104c7a-640d-4b51-aa00-177595f2a452 is in state STARTED 2026-03-07 01:16:55.164796 | orchestrator | 2026-03-07 01:16:55 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:16:58.206885 | orchestrator | 2026-03-07 01:16:58 | INFO  | Task dcccee1c-f381-4b56-bc80-79ff84f652d8 is in state STARTED 2026-03-07 01:16:58.208384 | orchestrator | 2026-03-07 01:16:58 | INFO  | Task 53104c7a-640d-4b51-aa00-177595f2a452 is in state STARTED 2026-03-07 01:16:58.208706 | orchestrator | 2026-03-07 01:16:58 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:17:01.253656 | orchestrator | 2026-03-07 01:17:01 | INFO  | Task dcccee1c-f381-4b56-bc80-79ff84f652d8 is in state STARTED 2026-03-07 01:17:01.257163 | orchestrator | 2026-03-07 01:17:01 | INFO  | Task 53104c7a-640d-4b51-aa00-177595f2a452 is in state STARTED 2026-03-07 01:17:01.257283 | orchestrator | 2026-03-07 01:17:01 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:17:04.308186 | orchestrator | 2026-03-07 
01:17:04 | INFO  | Task dcccee1c-f381-4b56-bc80-79ff84f652d8 is in state STARTED 2026-03-07 01:17:04.312827 | orchestrator | 2026-03-07 01:17:04 | INFO  | Task 53104c7a-640d-4b51-aa00-177595f2a452 is in state STARTED 2026-03-07 01:17:04.313523 | orchestrator | 2026-03-07 01:17:04 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:17:07.376804 | orchestrator | 2026-03-07 01:17:07 | INFO  | Task dcccee1c-f381-4b56-bc80-79ff84f652d8 is in state STARTED 2026-03-07 01:17:07.377736 | orchestrator | 2026-03-07 01:17:07 | INFO  | Task 53104c7a-640d-4b51-aa00-177595f2a452 is in state STARTED 2026-03-07 01:17:07.377836 | orchestrator | 2026-03-07 01:17:07 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:17:10.427865 | orchestrator | 2026-03-07 01:17:10 | INFO  | Task dcccee1c-f381-4b56-bc80-79ff84f652d8 is in state STARTED 2026-03-07 01:17:10.428367 | orchestrator | 2026-03-07 01:17:10 | INFO  | Task 53104c7a-640d-4b51-aa00-177595f2a452 is in state STARTED 2026-03-07 01:17:10.428399 | orchestrator | 2026-03-07 01:17:10 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:17:13.480687 | orchestrator | 2026-03-07 01:17:13 | INFO  | Task dcccee1c-f381-4b56-bc80-79ff84f652d8 is in state STARTED 2026-03-07 01:17:13.482330 | orchestrator | 2026-03-07 01:17:13 | INFO  | Task 53104c7a-640d-4b51-aa00-177595f2a452 is in state STARTED 2026-03-07 01:17:13.482372 | orchestrator | 2026-03-07 01:17:13 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:17:16.536399 | orchestrator | 2026-03-07 01:17:16 | INFO  | Task dcccee1c-f381-4b56-bc80-79ff84f652d8 is in state STARTED 2026-03-07 01:17:16.538218 | orchestrator | 2026-03-07 01:17:16 | INFO  | Task 53104c7a-640d-4b51-aa00-177595f2a452 is in state STARTED 2026-03-07 01:17:16.538269 | orchestrator | 2026-03-07 01:17:16 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:17:19.577820 | orchestrator | 2026-03-07 01:17:19 | INFO  | Task dcccee1c-f381-4b56-bc80-79ff84f652d8 is in state 
STARTED 2026-03-07 01:17:19.578486 | orchestrator | 2026-03-07 01:17:19 | INFO  | Task 53104c7a-640d-4b51-aa00-177595f2a452 is in state STARTED 2026-03-07 01:17:19.578596 | orchestrator | 2026-03-07 01:17:19 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:17:22.614936 | orchestrator | 2026-03-07 01:17:22 | INFO  | Task dcccee1c-f381-4b56-bc80-79ff84f652d8 is in state STARTED 2026-03-07 01:17:22.618435 | orchestrator | 2026-03-07 01:17:22 | INFO  | Task 53104c7a-640d-4b51-aa00-177595f2a452 is in state STARTED 2026-03-07 01:17:22.618527 | orchestrator | 2026-03-07 01:17:22 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:17:25.654830 | orchestrator | 2026-03-07 01:17:25 | INFO  | Task dcccee1c-f381-4b56-bc80-79ff84f652d8 is in state STARTED 2026-03-07 01:17:25.657297 | orchestrator | 2026-03-07 01:17:25 | INFO  | Task 53104c7a-640d-4b51-aa00-177595f2a452 is in state STARTED 2026-03-07 01:17:25.657365 | orchestrator | 2026-03-07 01:17:25 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:17:28.708327 | orchestrator | 2026-03-07 01:17:28 | INFO  | Task dcccee1c-f381-4b56-bc80-79ff84f652d8 is in state STARTED 2026-03-07 01:17:28.709500 | orchestrator | 2026-03-07 01:17:28 | INFO  | Task 53104c7a-640d-4b51-aa00-177595f2a452 is in state STARTED 2026-03-07 01:17:28.709594 | orchestrator | 2026-03-07 01:17:28 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:17:31.748179 | orchestrator | 2026-03-07 01:17:31 | INFO  | Task dcccee1c-f381-4b56-bc80-79ff84f652d8 is in state STARTED 2026-03-07 01:17:31.749784 | orchestrator | 2026-03-07 01:17:31 | INFO  | Task 53104c7a-640d-4b51-aa00-177595f2a452 is in state STARTED 2026-03-07 01:17:31.749844 | orchestrator | 2026-03-07 01:17:31 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:17:34.788079 | orchestrator | 2026-03-07 01:17:34 | INFO  | Task dcccee1c-f381-4b56-bc80-79ff84f652d8 is in state STARTED 2026-03-07 01:17:34.788850 | orchestrator | 2026-03-07 01:17:34 | INFO  
| Task 53104c7a-640d-4b51-aa00-177595f2a452 is in state STARTED 2026-03-07 01:17:34.788892 | orchestrator | 2026-03-07 01:17:34 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:17:37.831306 | orchestrator | 2026-03-07 01:17:37 | INFO  | Task dcccee1c-f381-4b56-bc80-79ff84f652d8 is in state STARTED 2026-03-07 01:17:37.832405 | orchestrator | 2026-03-07 01:17:37 | INFO  | Task 53104c7a-640d-4b51-aa00-177595f2a452 is in state STARTED 2026-03-07 01:17:37.832453 | orchestrator | 2026-03-07 01:17:37 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:17:40.875939 | orchestrator | 2026-03-07 01:17:40 | INFO  | Task dcccee1c-f381-4b56-bc80-79ff84f652d8 is in state STARTED 2026-03-07 01:17:40.876596 | orchestrator | 2026-03-07 01:17:40 | INFO  | Task 53104c7a-640d-4b51-aa00-177595f2a452 is in state STARTED 2026-03-07 01:17:40.876643 | orchestrator | 2026-03-07 01:17:40 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:17:43.922096 | orchestrator | 2026-03-07 01:17:43 | INFO  | Task dcccee1c-f381-4b56-bc80-79ff84f652d8 is in state STARTED 2026-03-07 01:17:43.923385 | orchestrator | 2026-03-07 01:17:43 | INFO  | Task 53104c7a-640d-4b51-aa00-177595f2a452 is in state STARTED 2026-03-07 01:17:43.923438 | orchestrator | 2026-03-07 01:17:43 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:17:46.967319 | orchestrator | 2026-03-07 01:17:46 | INFO  | Task dcccee1c-f381-4b56-bc80-79ff84f652d8 is in state STARTED 2026-03-07 01:17:46.969936 | orchestrator | 2026-03-07 01:17:46 | INFO  | Task 53104c7a-640d-4b51-aa00-177595f2a452 is in state STARTED 2026-03-07 01:17:46.970065 | orchestrator | 2026-03-07 01:17:46 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:17:50.025791 | orchestrator | 2026-03-07 01:17:50 | INFO  | Task dcccee1c-f381-4b56-bc80-79ff84f652d8 is in state STARTED 2026-03-07 01:17:50.026847 | orchestrator | 2026-03-07 01:17:50 | INFO  | Task 53104c7a-640d-4b51-aa00-177595f2a452 is in state STARTED 2026-03-07 
01:17:50.027449 | orchestrator | 2026-03-07 01:17:50 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:17:53.082317 | orchestrator | 2026-03-07 01:17:53 | INFO  | Task dcccee1c-f381-4b56-bc80-79ff84f652d8 is in state STARTED 2026-03-07 01:17:53.083446 | orchestrator | 2026-03-07 01:17:53 | INFO  | Task 53104c7a-640d-4b51-aa00-177595f2a452 is in state STARTED 2026-03-07 01:17:53.084014 | orchestrator | 2026-03-07 01:17:53 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:17:56.136815 | orchestrator | 2026-03-07 01:17:56 | INFO  | Task dcccee1c-f381-4b56-bc80-79ff84f652d8 is in state STARTED 2026-03-07 01:17:56.139897 | orchestrator | 2026-03-07 01:17:56 | INFO  | Task 53104c7a-640d-4b51-aa00-177595f2a452 is in state STARTED 2026-03-07 01:17:56.139972 | orchestrator | 2026-03-07 01:17:56 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:17:59.180564 | orchestrator | 2026-03-07 01:17:59 | INFO  | Task dcccee1c-f381-4b56-bc80-79ff84f652d8 is in state STARTED 2026-03-07 01:17:59.180658 | orchestrator | 2026-03-07 01:17:59 | INFO  | Task 53104c7a-640d-4b51-aa00-177595f2a452 is in state STARTED 2026-03-07 01:17:59.180666 | orchestrator | 2026-03-07 01:17:59 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:18:02.225858 | orchestrator | 2026-03-07 01:18:02 | INFO  | Task dcccee1c-f381-4b56-bc80-79ff84f652d8 is in state STARTED 2026-03-07 01:18:02.226833 | orchestrator | 2026-03-07 01:18:02 | INFO  | Task 53104c7a-640d-4b51-aa00-177595f2a452 is in state STARTED 2026-03-07 01:18:02.226867 | orchestrator | 2026-03-07 01:18:02 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:18:05.268759 | orchestrator | 2026-03-07 01:18:05 | INFO  | Task dcccee1c-f381-4b56-bc80-79ff84f652d8 is in state STARTED 2026-03-07 01:18:05.269563 | orchestrator | 2026-03-07 01:18:05 | INFO  | Task 53104c7a-640d-4b51-aa00-177595f2a452 is in state STARTED 2026-03-07 01:18:05.269603 | orchestrator | 2026-03-07 01:18:05 | INFO  | Wait 1 second(s) 
until the next check 2026-03-07 01:18:08.304244 | orchestrator | 2026-03-07 01:18:08 | INFO  | Task dcccee1c-f381-4b56-bc80-79ff84f652d8 is in state STARTED 2026-03-07 01:18:08.305270 | orchestrator | 2026-03-07 01:18:08 | INFO  | Task 53104c7a-640d-4b51-aa00-177595f2a452 is in state STARTED 2026-03-07 01:18:08.305304 | orchestrator | 2026-03-07 01:18:08 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:18:11.354424 | orchestrator | 2026-03-07 01:18:11 | INFO  | Task dcccee1c-f381-4b56-bc80-79ff84f652d8 is in state STARTED 2026-03-07 01:18:11.355851 | orchestrator | 2026-03-07 01:18:11 | INFO  | Task 53104c7a-640d-4b51-aa00-177595f2a452 is in state STARTED 2026-03-07 01:18:11.355896 | orchestrator | 2026-03-07 01:18:11 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:18:14.401952 | orchestrator | 2026-03-07 01:18:14 | INFO  | Task dcccee1c-f381-4b56-bc80-79ff84f652d8 is in state STARTED 2026-03-07 01:18:14.402106 | orchestrator | 2026-03-07 01:18:14 | INFO  | Task 53104c7a-640d-4b51-aa00-177595f2a452 is in state STARTED 2026-03-07 01:18:14.402125 | orchestrator | 2026-03-07 01:18:14 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:18:17.450411 | orchestrator | 2026-03-07 01:18:17 | INFO  | Task dcccee1c-f381-4b56-bc80-79ff84f652d8 is in state STARTED 2026-03-07 01:18:17.452400 | orchestrator | 2026-03-07 01:18:17 | INFO  | Task 53104c7a-640d-4b51-aa00-177595f2a452 is in state STARTED 2026-03-07 01:18:17.453744 | orchestrator | 2026-03-07 01:18:17 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:18:20.502914 | orchestrator | 2026-03-07 01:18:20 | INFO  | Task dcccee1c-f381-4b56-bc80-79ff84f652d8 is in state STARTED 2026-03-07 01:18:20.505157 | orchestrator | 2026-03-07 01:18:20 | INFO  | Task 53104c7a-640d-4b51-aa00-177595f2a452 is in state STARTED 2026-03-07 01:18:20.505951 | orchestrator | 2026-03-07 01:18:20 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:18:23.548695 | orchestrator | 2026-03-07 
01:18:23 | INFO  | Task dcccee1c-f381-4b56-bc80-79ff84f652d8 is in state STARTED 2026-03-07 01:18:23.550396 | orchestrator | 2026-03-07 01:18:23 | INFO  | Task 53104c7a-640d-4b51-aa00-177595f2a452 is in state STARTED 2026-03-07 01:18:23.550467 | orchestrator | 2026-03-07 01:18:23 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:18:26.587103 | orchestrator | 2026-03-07 01:18:26 | INFO  | Task dcccee1c-f381-4b56-bc80-79ff84f652d8 is in state STARTED 2026-03-07 01:18:26.587767 | orchestrator | 2026-03-07 01:18:26 | INFO  | Task 53104c7a-640d-4b51-aa00-177595f2a452 is in state STARTED 2026-03-07 01:18:26.587934 | orchestrator | 2026-03-07 01:18:26 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:18:29.691822 | orchestrator | 2026-03-07 01:18:29 | INFO  | Task dcccee1c-f381-4b56-bc80-79ff84f652d8 is in state STARTED 2026-03-07 01:18:29.694343 | orchestrator | 2026-03-07 01:18:29 | INFO  | Task 53104c7a-640d-4b51-aa00-177595f2a452 is in state STARTED 2026-03-07 01:18:29.694447 | orchestrator | 2026-03-07 01:18:29 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:18:32.740026 | orchestrator | 2026-03-07 01:18:32 | INFO  | Task dcccee1c-f381-4b56-bc80-79ff84f652d8 is in state STARTED 2026-03-07 01:18:32.740247 | orchestrator | 2026-03-07 01:18:32 | INFO  | Task 53104c7a-640d-4b51-aa00-177595f2a452 is in state STARTED 2026-03-07 01:18:32.740271 | orchestrator | 2026-03-07 01:18:32 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:18:35.778758 | orchestrator | 2026-03-07 01:18:35 | INFO  | Task dcccee1c-f381-4b56-bc80-79ff84f652d8 is in state STARTED 2026-03-07 01:18:35.781581 | orchestrator | 2026-03-07 01:18:35 | INFO  | Task 53104c7a-640d-4b51-aa00-177595f2a452 is in state STARTED 2026-03-07 01:18:35.781666 | orchestrator | 2026-03-07 01:18:35 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:18:38.815596 | orchestrator | 2026-03-07 01:18:38 | INFO  | Task dcccee1c-f381-4b56-bc80-79ff84f652d8 is in state 
STARTED 2026-03-07 01:18:38.816844 | orchestrator | 2026-03-07 01:18:38 | INFO  | Task 53104c7a-640d-4b51-aa00-177595f2a452 is in state STARTED 2026-03-07 01:18:38.816894 | orchestrator | 2026-03-07 01:18:38 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:18:41.868970 | orchestrator | 2026-03-07 01:18:41 | INFO  | Task dcccee1c-f381-4b56-bc80-79ff84f652d8 is in state STARTED 2026-03-07 01:18:41.869344 | orchestrator | 2026-03-07 01:18:41 | INFO  | Task 53104c7a-640d-4b51-aa00-177595f2a452 is in state STARTED 2026-03-07 01:18:41.869375 | orchestrator | 2026-03-07 01:18:41 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:18:44.911974 | orchestrator | 2026-03-07 01:18:44 | INFO  | Task dcccee1c-f381-4b56-bc80-79ff84f652d8 is in state STARTED 2026-03-07 01:18:44.915064 | orchestrator | 2026-03-07 01:18:44 | INFO  | Task 53104c7a-640d-4b51-aa00-177595f2a452 is in state STARTED 2026-03-07 01:18:44.915146 | orchestrator | 2026-03-07 01:18:44 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:18:47.971111 | orchestrator | 2026-03-07 01:18:47 | INFO  | Task dcccee1c-f381-4b56-bc80-79ff84f652d8 is in state STARTED 2026-03-07 01:18:47.973419 | orchestrator | 2026-03-07 01:18:47 | INFO  | Task 53104c7a-640d-4b51-aa00-177595f2a452 is in state STARTED 2026-03-07 01:18:47.973594 | orchestrator | 2026-03-07 01:18:47 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:18:51.017184 | orchestrator | 2026-03-07 01:18:51 | INFO  | Task dcccee1c-f381-4b56-bc80-79ff84f652d8 is in state STARTED 2026-03-07 01:18:51.017945 | orchestrator | 2026-03-07 01:18:51 | INFO  | Task 53104c7a-640d-4b51-aa00-177595f2a452 is in state STARTED 2026-03-07 01:18:51.017986 | orchestrator | 2026-03-07 01:18:51 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:18:54.060682 | orchestrator | 2026-03-07 01:18:54 | INFO  | Task dcccee1c-f381-4b56-bc80-79ff84f652d8 is in state STARTED 2026-03-07 01:18:54.062582 | orchestrator | 2026-03-07 01:18:54 | INFO  
| Task 53104c7a-640d-4b51-aa00-177595f2a452 is in state STARTED 2026-03-07 01:18:54.062650 | orchestrator | 2026-03-07 01:18:54 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:18:57.102699 | orchestrator | 2026-03-07 01:18:57 | INFO  | Task dcccee1c-f381-4b56-bc80-79ff84f652d8 is in state STARTED 2026-03-07 01:18:57.104355 | orchestrator | 2026-03-07 01:18:57 | INFO  | Task 53104c7a-640d-4b51-aa00-177595f2a452 is in state STARTED 2026-03-07 01:18:57.104447 | orchestrator | 2026-03-07 01:18:57 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:19:00.140831 | orchestrator | 2026-03-07 01:19:00 | INFO  | Task dcccee1c-f381-4b56-bc80-79ff84f652d8 is in state STARTED 2026-03-07 01:19:00.143683 | orchestrator | 2026-03-07 01:19:00 | INFO  | Task 53104c7a-640d-4b51-aa00-177595f2a452 is in state STARTED 2026-03-07 01:19:00.143770 | orchestrator | 2026-03-07 01:19:00 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:19:03.193323 | orchestrator | 2026-03-07 01:19:03 | INFO  | Task dcccee1c-f381-4b56-bc80-79ff84f652d8 is in state STARTED 2026-03-07 01:19:03.194010 | orchestrator | 2026-03-07 01:19:03 | INFO  | Task 53104c7a-640d-4b51-aa00-177595f2a452 is in state STARTED 2026-03-07 01:19:03.194046 | orchestrator | 2026-03-07 01:19:03 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:19:06.253128 | orchestrator | 2026-03-07 01:19:06 | INFO  | Task dcccee1c-f381-4b56-bc80-79ff84f652d8 is in state STARTED 2026-03-07 01:19:06.254963 | orchestrator | 2026-03-07 01:19:06 | INFO  | Task 53104c7a-640d-4b51-aa00-177595f2a452 is in state STARTED 2026-03-07 01:19:06.254998 | orchestrator | 2026-03-07 01:19:06 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:19:09.307602 | orchestrator | 2026-03-07 01:19:09 | INFO  | Task dcccee1c-f381-4b56-bc80-79ff84f652d8 is in state STARTED 2026-03-07 01:19:09.308553 | orchestrator | 2026-03-07 01:19:09 | INFO  | Task 53104c7a-640d-4b51-aa00-177595f2a452 is in state STARTED 2026-03-07 
01:19:09.308588 | orchestrator | 2026-03-07 01:19:09 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:19:12.356732 | orchestrator | 2026-03-07 01:19:12 | INFO  | Task dcccee1c-f381-4b56-bc80-79ff84f652d8 is in state STARTED 2026-03-07 01:19:12.358559 | orchestrator | 2026-03-07 01:19:12 | INFO  | Task 53104c7a-640d-4b51-aa00-177595f2a452 is in state STARTED 2026-03-07 01:19:12.358620 | orchestrator | 2026-03-07 01:19:12 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:19:15.401506 | orchestrator | 2026-03-07 01:19:15 | INFO  | Task dcccee1c-f381-4b56-bc80-79ff84f652d8 is in state STARTED 2026-03-07 01:19:15.402528 | orchestrator | 2026-03-07 01:19:15 | INFO  | Task 53104c7a-640d-4b51-aa00-177595f2a452 is in state STARTED 2026-03-07 01:19:15.402556 | orchestrator | 2026-03-07 01:19:15 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:19:18.443406 | orchestrator | 2026-03-07 01:19:18 | INFO  | Task dcccee1c-f381-4b56-bc80-79ff84f652d8 is in state STARTED 2026-03-07 01:19:18.444365 | orchestrator | 2026-03-07 01:19:18 | INFO  | Task 53104c7a-640d-4b51-aa00-177595f2a452 is in state STARTED 2026-03-07 01:19:18.444435 | orchestrator | 2026-03-07 01:19:18 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:19:21.481642 | orchestrator | 2026-03-07 01:19:21 | INFO  | Task dcccee1c-f381-4b56-bc80-79ff84f652d8 is in state STARTED 2026-03-07 01:19:21.482879 | orchestrator | 2026-03-07 01:19:21 | INFO  | Task 53104c7a-640d-4b51-aa00-177595f2a452 is in state STARTED 2026-03-07 01:19:21.483482 | orchestrator | 2026-03-07 01:19:21 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:19:24.534482 | orchestrator | 2026-03-07 01:19:24 | INFO  | Task dcccee1c-f381-4b56-bc80-79ff84f652d8 is in state STARTED 2026-03-07 01:19:24.536975 | orchestrator | 2026-03-07 01:19:24 | INFO  | Task 53104c7a-640d-4b51-aa00-177595f2a452 is in state STARTED 2026-03-07 01:19:24.537058 | orchestrator | 2026-03-07 01:19:24 | INFO  | Wait 1 second(s) 
until the next check
2026-03-07 01:19:27.582153 | orchestrator | 2026-03-07 01:19:27 | INFO  | Task dcccee1c-f381-4b56-bc80-79ff84f652d8 is in state STARTED
2026-03-07 01:19:27.588252 | orchestrator | 2026-03-07 01:19:27 | INFO  | Task 53104c7a-640d-4b51-aa00-177595f2a452 is in state SUCCESS
2026-03-07 01:19:27.590449 | orchestrator |
2026-03-07 01:19:27.590536 | orchestrator |
2026-03-07 01:19:27.590554 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-07 01:19:27.590569 | orchestrator |
2026-03-07 01:19:27.590583 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-07 01:19:27.590597 | orchestrator | Saturday 07 March 2026 01:12:57 +0000 (0:00:00.324) 0:00:00.324 ********
2026-03-07 01:19:27.590611 | orchestrator | ok: [testbed-node-0]
2026-03-07 01:19:27.590627 | orchestrator | ok: [testbed-node-1]
2026-03-07 01:19:27.590642 | orchestrator | ok: [testbed-node-2]
2026-03-07 01:19:27.590655 | orchestrator |
2026-03-07 01:19:27.590669 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-07 01:19:27.590682 | orchestrator | Saturday 07 March 2026 01:12:57 +0000 (0:00:00.436) 0:00:00.761 ********
2026-03-07 01:19:27.590696 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True)
2026-03-07 01:19:27.590709 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True)
2026-03-07 01:19:27.590723 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True)
2026-03-07 01:19:27.590737 | orchestrator |
2026-03-07 01:19:27.590751 | orchestrator | PLAY [Wait for the Nova service] ***********************************************
2026-03-07 01:19:27.590764 | orchestrator |
2026-03-07 01:19:27.590778 | orchestrator | TASK [Waiting for Nova public port to be UP] ***********************************
2026-03-07 01:19:27.590791 | orchestrator | Saturday 07 March 2026 01:12:58 +0000 (0:00:00.846) 0:00:01.608 ********
2026-03-07 01:19:27.590804 | orchestrator | ok: [testbed-node-0]
2026-03-07 01:19:27.590817 | orchestrator | ok: [testbed-node-1]
2026-03-07 01:19:27.590831 | orchestrator | ok: [testbed-node-2]
2026-03-07 01:19:27.590845 | orchestrator |
2026-03-07 01:19:27.590858 | orchestrator | PLAY RECAP *********************************************************************
2026-03-07 01:19:27.590872 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-07 01:19:27.590887 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-07 01:19:27.590901 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-07 01:19:27.590914 | orchestrator |
2026-03-07 01:19:27.590928 | orchestrator |
2026-03-07 01:19:27.590943 | orchestrator | TASKS RECAP ********************************************************************
2026-03-07 01:19:27.590958 | orchestrator | Saturday 07 March 2026 01:14:33 +0000 (0:01:34.870) 0:01:36.478 ********
2026-03-07 01:19:27.590972 | orchestrator | ===============================================================================
2026-03-07 01:19:27.591227 | orchestrator | Waiting for Nova public port to be UP ---------------------------------- 94.87s
2026-03-07 01:19:27.591247 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.85s
2026-03-07 01:19:27.591277 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.44s
2026-03-07 01:19:27.591292 | orchestrator |
2026-03-07 01:19:27.591350 | orchestrator |
2026-03-07 01:19:27.591365 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-07 01:19:27.591379 | orchestrator |
2026-03-07 01:19:27.591393 | orchestrator | TASK [Group hosts based on OpenStack release] **********************************
2026-03-07 01:19:27.591406 | orchestrator | Saturday 07 March 2026 01:10:07 +0000 (0:00:00.635) 0:00:00.635 ********
2026-03-07 01:19:27.591420 | orchestrator | changed: [testbed-manager]
2026-03-07 01:19:27.591435 | orchestrator | changed: [testbed-node-0]
2026-03-07 01:19:27.591448 | orchestrator | changed: [testbed-node-1]
2026-03-07 01:19:27.591461 | orchestrator | changed: [testbed-node-2]
2026-03-07 01:19:27.591475 | orchestrator | changed: [testbed-node-3]
2026-03-07 01:19:27.591488 | orchestrator | changed: [testbed-node-4]
2026-03-07 01:19:27.591502 | orchestrator | changed: [testbed-node-5]
2026-03-07 01:19:27.591515 | orchestrator |
2026-03-07 01:19:27.591527 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-07 01:19:27.591540 | orchestrator | Saturday 07 March 2026 01:10:08 +0000 (0:00:01.362) 0:00:01.998 ********
2026-03-07 01:19:27.591550 | orchestrator | changed: [testbed-manager]
2026-03-07 01:19:27.591558 | orchestrator | changed: [testbed-node-0]
2026-03-07 01:19:27.591566 | orchestrator | changed: [testbed-node-1]
2026-03-07 01:19:27.591573 | orchestrator | changed: [testbed-node-2]
2026-03-07 01:19:27.591581 | orchestrator | changed: [testbed-node-3]
2026-03-07 01:19:27.591589 | orchestrator | changed: [testbed-node-4]
2026-03-07 01:19:27.591597 | orchestrator | changed: [testbed-node-5]
2026-03-07 01:19:27.591604 | orchestrator |
2026-03-07 01:19:27.591612 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-07 01:19:27.591620 | orchestrator | Saturday 07 March 2026 01:10:09 +0000 (0:00:00.669) 0:00:02.667 ********
2026-03-07 01:19:27.591628 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True)
2026-03-07 01:19:27.591636 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True)
2026-03-07 01:19:27.591644 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True)
2026-03-07 01:19:27.591652 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True)
2026-03-07 01:19:27.591660 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True)
2026-03-07 01:19:27.591668 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True)
2026-03-07 01:19:27.591675 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True)
2026-03-07 01:19:27.591683 | orchestrator |
2026-03-07 01:19:27.591691 | orchestrator | PLAY [Bootstrap nova API databases] ********************************************
2026-03-07 01:19:27.591699 | orchestrator |
2026-03-07 01:19:27.591724 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-03-07 01:19:27.591733 | orchestrator | Saturday 07 March 2026 01:10:10 +0000 (0:00:00.841) 0:00:03.509 ********
2026-03-07 01:19:27.591741 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-07 01:19:27.591748 | orchestrator |
2026-03-07 01:19:27.591756 | orchestrator | TASK [nova : Creating Nova databases] ******************************************
2026-03-07 01:19:27.591765 | orchestrator | Saturday 07 March 2026 01:10:11 +0000 (0:00:01.269) 0:00:04.778 ********
2026-03-07 01:19:27.591773 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0)
2026-03-07 01:19:27.591799 | orchestrator | changed: [testbed-node-0] => (item=nova_api)
2026-03-07 01:19:27.591808 | orchestrator |
2026-03-07 01:19:27.591816 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] *************
2026-03-07 01:19:27.591824 | orchestrator | Saturday 07 March 2026 01:10:16 +0000 (0:00:04.393) 0:00:09.172 ********
2026-03-07 01:19:27.591832 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-07 01:19:27.591857 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-07 01:19:27.591870 | orchestrator | changed: [testbed-node-0]
2026-03-07 01:19:27.591883 | orchestrator |
2026-03-07 01:19:27.591896 | orchestrator | TASK [nova : 
Ensuring config directories exist] ******************************** 2026-03-07 01:19:27.591909 | orchestrator | Saturday 07 March 2026 01:10:20 +0000 (0:00:04.596) 0:00:13.768 ******** 2026-03-07 01:19:27.591922 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:19:27.591937 | orchestrator | 2026-03-07 01:19:27.592037 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************ 2026-03-07 01:19:27.592046 | orchestrator | Saturday 07 March 2026 01:10:22 +0000 (0:00:01.390) 0:00:15.159 ******** 2026-03-07 01:19:27.592054 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:19:27.592062 | orchestrator | 2026-03-07 01:19:27.592093 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ******************** 2026-03-07 01:19:27.592101 | orchestrator | Saturday 07 March 2026 01:10:24 +0000 (0:00:02.109) 0:00:17.268 ******** 2026-03-07 01:19:27.592110 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:19:27.592118 | orchestrator | 2026-03-07 01:19:27.592126 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-03-07 01:19:27.592134 | orchestrator | Saturday 07 March 2026 01:10:27 +0000 (0:00:03.513) 0:00:20.782 ******** 2026-03-07 01:19:27.592142 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:19:27.592150 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:19:27.592158 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:19:27.592165 | orchestrator | 2026-03-07 01:19:27.592173 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2026-03-07 01:19:27.592181 | orchestrator | Saturday 07 March 2026 01:10:28 +0000 (0:00:00.589) 0:00:21.371 ******** 2026-03-07 01:19:27.592189 | orchestrator | ok: [testbed-node-0] 2026-03-07 01:19:27.592197 | orchestrator | 2026-03-07 01:19:27.592204 | orchestrator | TASK [nova : Create cell0 mappings] ******************************************** 2026-03-07 
01:19:27.592212 | orchestrator | Saturday 07 March 2026 01:11:02 +0000 (0:00:33.882) 0:00:55.253 ******** 2026-03-07 01:19:27.592220 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:19:27.592228 | orchestrator | 2026-03-07 01:19:27.592236 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-03-07 01:19:27.592244 | orchestrator | Saturday 07 March 2026 01:11:19 +0000 (0:00:16.951) 0:01:12.205 ******** 2026-03-07 01:19:27.592252 | orchestrator | ok: [testbed-node-0] 2026-03-07 01:19:27.592260 | orchestrator | 2026-03-07 01:19:27.592268 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-03-07 01:19:27.592283 | orchestrator | Saturday 07 March 2026 01:11:32 +0000 (0:00:13.726) 0:01:25.932 ******** 2026-03-07 01:19:27.592291 | orchestrator | ok: [testbed-node-0] 2026-03-07 01:19:27.592299 | orchestrator | 2026-03-07 01:19:27.592330 | orchestrator | TASK [nova : Update cell0 mappings] ******************************************** 2026-03-07 01:19:27.592343 | orchestrator | Saturday 07 March 2026 01:11:34 +0000 (0:00:01.342) 0:01:27.274 ******** 2026-03-07 01:19:27.592350 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:19:27.592358 | orchestrator | 2026-03-07 01:19:27.592366 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-03-07 01:19:27.592374 | orchestrator | Saturday 07 March 2026 01:11:34 +0000 (0:00:00.536) 0:01:27.810 ******** 2026-03-07 01:19:27.592382 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 01:19:27.592390 | orchestrator | 2026-03-07 01:19:27.592398 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2026-03-07 01:19:27.592406 | orchestrator | Saturday 07 March 2026 01:11:35 +0000 (0:00:00.758) 0:01:28.569 ******** 2026-03-07 01:19:27.592414 | 
orchestrator | ok: [testbed-node-0] 2026-03-07 01:19:27.592422 | orchestrator | 2026-03-07 01:19:27.592429 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2026-03-07 01:19:27.592437 | orchestrator | Saturday 07 March 2026 01:11:55 +0000 (0:00:20.208) 0:01:48.777 ******** 2026-03-07 01:19:27.592445 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:19:27.592460 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:19:27.592468 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:19:27.592475 | orchestrator | 2026-03-07 01:19:27.592483 | orchestrator | PLAY [Bootstrap nova cell databases] ******************************************* 2026-03-07 01:19:27.592491 | orchestrator | 2026-03-07 01:19:27.592499 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2026-03-07 01:19:27.592507 | orchestrator | Saturday 07 March 2026 01:11:56 +0000 (0:00:00.390) 0:01:49.167 ******** 2026-03-07 01:19:27.592515 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 01:19:27.592523 | orchestrator | 2026-03-07 01:19:27.592531 | orchestrator | TASK [nova-cell : Creating Nova cell database] ********************************* 2026-03-07 01:19:27.592538 | orchestrator | Saturday 07 March 2026 01:11:56 +0000 (0:00:00.713) 0:01:49.881 ******** 2026-03-07 01:19:27.592546 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:19:27.592554 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:19:27.592562 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:19:27.592570 | orchestrator | 2026-03-07 01:19:27.592577 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] **** 2026-03-07 01:19:27.592586 | orchestrator | Saturday 07 March 2026 01:11:59 +0000 (0:00:02.132) 0:01:52.013 ******** 2026-03-07 01:19:27.592600 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:19:27.592613 | 
orchestrator | skipping: [testbed-node-2] 2026-03-07 01:19:27.592627 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:19:27.592663 | orchestrator | 2026-03-07 01:19:27.592677 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2026-03-07 01:19:27.592692 | orchestrator | Saturday 07 March 2026 01:12:01 +0000 (0:00:02.261) 0:01:54.274 ******** 2026-03-07 01:19:27.592706 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:19:27.592720 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:19:27.592743 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:19:27.592752 | orchestrator | 2026-03-07 01:19:27.592760 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2026-03-07 01:19:27.592768 | orchestrator | Saturday 07 March 2026 01:12:01 +0000 (0:00:00.423) 0:01:54.698 ******** 2026-03-07 01:19:27.592776 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-03-07 01:19:27.592784 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:19:27.592792 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-03-07 01:19:27.592799 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:19:27.592807 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-03-07 01:19:27.592816 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}] 2026-03-07 01:19:27.592823 | orchestrator | 2026-03-07 01:19:27.592831 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2026-03-07 01:19:27.592839 | orchestrator | Saturday 07 March 2026 01:12:10 +0000 (0:00:09.142) 0:02:03.841 ******** 2026-03-07 01:19:27.592847 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:19:27.592855 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:19:27.592863 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:19:27.592871 | orchestrator | 2026-03-07 01:19:27.592879 | orchestrator | TASK 
[service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2026-03-07 01:19:27.592887 | orchestrator | Saturday 07 March 2026 01:12:11 +0000 (0:00:00.674) 0:02:04.516 ******** 2026-03-07 01:19:27.592894 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-03-07 01:19:27.593032 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:19:27.593043 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-03-07 01:19:27.593071 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:19:27.593080 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-03-07 01:19:27.593088 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:19:27.593096 | orchestrator | 2026-03-07 01:19:27.593104 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2026-03-07 01:19:27.593111 | orchestrator | Saturday 07 March 2026 01:12:12 +0000 (0:00:00.903) 0:02:05.419 ******** 2026-03-07 01:19:27.593127 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:19:27.593135 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:19:27.593143 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:19:27.593150 | orchestrator | 2026-03-07 01:19:27.593158 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ****** 2026-03-07 01:19:27.593166 | orchestrator | Saturday 07 March 2026 01:12:13 +0000 (0:00:00.610) 0:02:06.030 ******** 2026-03-07 01:19:27.593174 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:19:27.593182 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:19:27.593190 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:19:27.593198 | orchestrator | 2026-03-07 01:19:27.593206 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] ************** 2026-03-07 01:19:27.593214 | orchestrator | Saturday 07 March 2026 01:12:14 +0000 (0:00:01.174) 0:02:07.204 ******** 2026-03-07 01:19:27.593227 | orchestrator | 
skipping: [testbed-node-1] 2026-03-07 01:19:27.593235 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:19:27.593243 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:19:27.593251 | orchestrator | 2026-03-07 01:19:27.593259 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] *********************** 2026-03-07 01:19:27.593267 | orchestrator | Saturday 07 March 2026 01:12:16 +0000 (0:00:02.619) 0:02:09.824 ******** 2026-03-07 01:19:27.593274 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:19:27.593282 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:19:27.593290 | orchestrator | ok: [testbed-node-0] 2026-03-07 01:19:27.593298 | orchestrator | 2026-03-07 01:19:27.593337 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-03-07 01:19:27.593347 | orchestrator | Saturday 07 March 2026 01:12:40 +0000 (0:00:23.460) 0:02:33.284 ******** 2026-03-07 01:19:27.593356 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:19:27.593363 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:19:27.593372 | orchestrator | ok: [testbed-node-0] 2026-03-07 01:19:27.593380 | orchestrator | 2026-03-07 01:19:27.593388 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-03-07 01:19:27.593395 | orchestrator | Saturday 07 March 2026 01:12:55 +0000 (0:00:14.929) 0:02:48.214 ******** 2026-03-07 01:19:27.593403 | orchestrator | ok: [testbed-node-0] 2026-03-07 01:19:27.593411 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:19:27.593419 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:19:27.593427 | orchestrator | 2026-03-07 01:19:27.593435 | orchestrator | TASK [nova-cell : Create cell] ************************************************* 2026-03-07 01:19:27.593443 | orchestrator | Saturday 07 March 2026 01:12:56 +0000 (0:00:01.265) 0:02:49.479 ******** 2026-03-07 01:19:27.593450 | orchestrator | skipping: 
[testbed-node-1] 2026-03-07 01:19:27.593458 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:19:27.593466 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:19:27.593474 | orchestrator | 2026-03-07 01:19:27.593482 | orchestrator | TASK [nova-cell : Update cell] ************************************************* 2026-03-07 01:19:27.593490 | orchestrator | Saturday 07 March 2026 01:13:11 +0000 (0:00:14.875) 0:03:04.355 ******** 2026-03-07 01:19:27.593498 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:19:27.593506 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:19:27.593514 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:19:27.593522 | orchestrator | 2026-03-07 01:19:27.593530 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2026-03-07 01:19:27.593538 | orchestrator | Saturday 07 March 2026 01:13:12 +0000 (0:00:01.192) 0:03:05.548 ******** 2026-03-07 01:19:27.593546 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:19:27.593564 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:19:27.593572 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:19:27.593580 | orchestrator | 2026-03-07 01:19:27.593588 | orchestrator | PLAY [Apply role nova] ********************************************************* 2026-03-07 01:19:27.593596 | orchestrator | 2026-03-07 01:19:27.593604 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-03-07 01:19:27.593618 | orchestrator | Saturday 07 March 2026 01:13:13 +0000 (0:00:00.616) 0:03:06.164 ******** 2026-03-07 01:19:27.593627 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 01:19:27.593635 | orchestrator | 2026-03-07 01:19:27.593651 | orchestrator | TASK [service-ks-register : nova | Creating services] ************************** 2026-03-07 01:19:27.593660 | orchestrator | Saturday 07 March 2026 01:13:13 
+0000 (0:00:00.656) 0:03:06.821 ******** 2026-03-07 01:19:27.593667 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))  2026-03-07 01:19:27.593676 | orchestrator | changed: [testbed-node-0] => (item=nova (compute)) 2026-03-07 01:19:27.593683 | orchestrator | 2026-03-07 01:19:27.593691 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] ************************* 2026-03-07 01:19:27.593699 | orchestrator | Saturday 07 March 2026 01:13:17 +0000 (0:00:03.902) 0:03:10.723 ******** 2026-03-07 01:19:27.593708 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)  2026-03-07 01:19:27.593717 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)  2026-03-07 01:19:27.593725 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal) 2026-03-07 01:19:27.593771 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public) 2026-03-07 01:19:27.593779 | orchestrator | 2026-03-07 01:19:27.593787 | orchestrator | TASK [service-ks-register : nova | Creating projects] ************************** 2026-03-07 01:19:27.593795 | orchestrator | Saturday 07 March 2026 01:13:24 +0000 (0:00:06.800) 0:03:17.524 ******** 2026-03-07 01:19:27.593803 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-07 01:19:27.593811 | orchestrator | 2026-03-07 01:19:27.593819 | orchestrator | TASK [service-ks-register : nova | Creating users] ***************************** 2026-03-07 01:19:27.593850 | orchestrator | Saturday 07 March 2026 01:13:27 +0000 (0:00:03.368) 0:03:20.893 ******** 2026-03-07 01:19:27.593860 | orchestrator | changed: [testbed-node-0] => (item=nova -> service) 2026-03-07 01:19:27.593868 | orchestrator | [WARNING]: Module did not set no_log for update_password 
2026-03-07 01:19:27.593876 | orchestrator | 2026-03-07 01:19:27.593884 | orchestrator | TASK [service-ks-register : nova | Creating roles] ***************************** 2026-03-07 01:19:27.593893 | orchestrator | Saturday 07 March 2026 01:13:32 +0000 (0:00:04.258) 0:03:25.151 ******** 2026-03-07 01:19:27.593900 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-07 01:19:27.593908 | orchestrator | 2026-03-07 01:19:27.593916 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************ 2026-03-07 01:19:27.593924 | orchestrator | Saturday 07 March 2026 01:13:35 +0000 (0:00:03.468) 0:03:28.619 ******** 2026-03-07 01:19:27.593932 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin) 2026-03-07 01:19:27.593940 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2026-03-07 01:19:27.593948 | orchestrator | 2026-03-07 01:19:27.593962 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2026-03-07 01:19:27.593970 | orchestrator | Saturday 07 March 2026 01:13:43 +0000 (0:00:07.710) 0:03:36.330 ******** 2026-03-07 01:19:27.594009 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-07 01:19:27.594107 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-07 01:19:27.594127 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-07 01:19:27.594148 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-07 01:19:27.594160 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-07 01:19:27.594175 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-07 01:19:27.594184 | orchestrator | 2026-03-07 01:19:27.594192 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2026-03-07 01:19:27.594200 | orchestrator | Saturday 07 March 2026 01:13:44 +0000 (0:00:01.439) 0:03:37.769 ******** 2026-03-07 01:19:27.594209 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:19:27.594216 | orchestrator | 2026-03-07 01:19:27.594224 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2026-03-07 01:19:27.594232 | orchestrator | Saturday 07 March 2026 01:13:44 +0000 (0:00:00.138) 0:03:37.907 ******** 2026-03-07 01:19:27.594240 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:19:27.594248 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:19:27.594256 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:19:27.594263 | orchestrator | 2026-03-07 01:19:27.594272 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2026-03-07 01:19:27.594280 | orchestrator | Saturday 07 March 2026 01:13:45 +0000 (0:00:00.660) 0:03:38.568 ******** 
2026-03-07 01:19:27.594293 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-07 01:19:27.594302 | orchestrator | 2026-03-07 01:19:27.594334 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2026-03-07 01:19:27.594343 | orchestrator | Saturday 07 March 2026 01:13:46 +0000 (0:00:00.863) 0:03:39.432 ******** 2026-03-07 01:19:27.594351 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:19:27.594359 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:19:27.594367 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:19:27.594375 | orchestrator | 2026-03-07 01:19:27.594382 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-03-07 01:19:27.594391 | orchestrator | Saturday 07 March 2026 01:13:46 +0000 (0:00:00.334) 0:03:39.767 ******** 2026-03-07 01:19:27.594399 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 01:19:27.594407 | orchestrator | 2026-03-07 01:19:27.594415 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-03-07 01:19:27.594422 | orchestrator | Saturday 07 March 2026 01:13:47 +0000 (0:00:00.696) 0:03:40.463 ******** 2026-03-07 01:19:27.594436 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-07 01:19:27.594452 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-07 01:19:27.594468 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 
'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-07 01:19:27.594478 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-07 01:19:27.594486 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-07 01:19:27.594505 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-07 01:19:27.594519 | orchestrator | 2026-03-07 01:19:27.594528 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-03-07 01:19:27.594536 | orchestrator | Saturday 07 March 2026 01:13:50 +0000 (0:00:03.105) 0:03:43.569 ******** 2026-03-07 01:19:27.594544 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 
'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-07 01:19:27.594553 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-07 01:19:27.594562 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:19:27.594577 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-07 01:19:27.594586 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-07 01:19:27.594600 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:19:27.594613 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-07 01:19:27.594622 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-07 01:19:27.594630 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:19:27.594638 | orchestrator | 2026-03-07 01:19:27.594646 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-03-07 01:19:27.594654 | orchestrator | Saturday 07 March 2026 01:13:51 +0000 (0:00:00.719) 0:03:44.288 ******** 2026-03-07 01:19:27.595007 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-07 01:19:27.595078 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-07 01:19:27.595106 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:19:27.595126 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 
'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-07 01:19:27.595134 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-07 01:19:27.595140 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:19:27.595161 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-07 01:19:27.595168 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-07 01:19:27.595180 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:19:27.595186 | orchestrator | 2026-03-07 01:19:27.595194 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2026-03-07 01:19:27.595201 | orchestrator | Saturday 07 March 2026 01:13:52 +0000 (0:00:00.960) 0:03:45.248 ******** 2026-03-07 01:19:27.595211 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-07 01:19:27.595218 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-07 01:19:27.595231 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-07 01:19:27.595242 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-07 
01:19:27.595253 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-07 01:19:27.595259 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-07 01:19:27.595266 | orchestrator | 2026-03-07 01:19:27.595272 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2026-03-07 01:19:27.595278 | orchestrator | Saturday 07 March 2026 01:13:55 +0000 (0:00:02.832) 0:03:48.081 ******** 2026-03-07 01:19:27.595288 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-07 01:19:27.595296 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 
'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-07 01:19:27.595374 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-07 01:19:27.595383 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-07 01:19:27.595395 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-07 01:19:27.595411 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-07 01:19:27.595423 | orchestrator | 2026-03-07 01:19:27.595442 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2026-03-07 01:19:27.595453 | orchestrator | Saturday 07 March 2026 01:14:01 +0000 (0:00:06.364) 0:03:54.445 ******** 2026-03-07 01:19:27.595463 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-07 01:19:27.595486 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-07 01:19:27.595498 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:19:27.595511 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-07 01:19:27.595524 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-07 01:19:27.595536 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:19:27.595552 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 
'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-07 01:19:27.595566 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-07 01:19:27.595574 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:19:27.595582 | orchestrator | 2026-03-07 01:19:27.595589 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2026-03-07 01:19:27.595596 | orchestrator | Saturday 07 March 2026 01:14:02 +0000 (0:00:00.766) 0:03:55.212 ******** 2026-03-07 01:19:27.595604 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:19:27.595615 | orchestrator | changed: [testbed-node-2] 2026-03-07 01:19:27.595623 | orchestrator | changed: [testbed-node-1] 2026-03-07 01:19:27.595631 | orchestrator | 2026-03-07 01:19:27.595641 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2026-03-07 
01:19:27.595653 | orchestrator | Saturday 07 March 2026 01:14:03 +0000 (0:00:01.685) 0:03:56.897 ******** 2026-03-07 01:19:27.595664 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:19:27.595675 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:19:27.595687 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:19:27.595699 | orchestrator | 2026-03-07 01:19:27.595712 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2026-03-07 01:19:27.595724 | orchestrator | Saturday 07 March 2026 01:14:04 +0000 (0:00:00.378) 0:03:57.276 ******** 2026-03-07 01:19:27.595736 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-07 01:19:27.595756 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 
'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-07 01:19:27.595776 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-07 01:19:27.595785 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-07 01:19:27.595793 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-07 01:19:27.595801 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-07 01:19:27.595817 | orchestrator | 2026-03-07 01:19:27.595824 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-03-07 01:19:27.595831 | orchestrator | Saturday 07 March 2026 01:14:06 +0000 (0:00:02.280) 0:03:59.556 ******** 2026-03-07 01:19:27.595839 | orchestrator | 2026-03-07 01:19:27.595846 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-03-07 01:19:27.595866 | orchestrator | Saturday 07 March 2026 01:14:06 +0000 (0:00:00.153) 0:03:59.710 ******** 2026-03-07 01:19:27.595874 | orchestrator | 2026-03-07 01:19:27.595882 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-03-07 01:19:27.595889 | orchestrator | Saturday 07 March 2026 01:14:06 +0000 (0:00:00.149) 0:03:59.859 ******** 2026-03-07 01:19:27.595897 | orchestrator | 2026-03-07 01:19:27.595904 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2026-03-07 01:19:27.595912 | orchestrator | Saturday 07 March 2026 01:14:07 +0000 (0:00:00.167) 0:04:00.027 ******** 2026-03-07 01:19:27.595919 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:19:27.595926 | orchestrator | changed: [testbed-node-2] 2026-03-07 01:19:27.595932 | orchestrator | changed: [testbed-node-1] 2026-03-07 01:19:27.595939 | orchestrator | 2026-03-07 01:19:27.595946 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2026-03-07 01:19:27.595952 | orchestrator | Saturday 07 March 2026 01:14:24 +0000 (0:00:17.685) 0:04:17.713 ******** 2026-03-07 01:19:27.595958 | orchestrator | changed: [testbed-node-0] 
2026-03-07 01:19:27.595965 | orchestrator | changed: [testbed-node-1] 2026-03-07 01:19:27.595971 | orchestrator | changed: [testbed-node-2] 2026-03-07 01:19:27.595978 | orchestrator | 2026-03-07 01:19:27.595984 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2026-03-07 01:19:27.595990 | orchestrator | 2026-03-07 01:19:27.595997 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-07 01:19:27.596003 | orchestrator | Saturday 07 March 2026 01:14:30 +0000 (0:00:06.036) 0:04:23.749 ******** 2026-03-07 01:19:27.596010 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 01:19:27.596018 | orchestrator | 2026-03-07 01:19:27.596024 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-07 01:19:27.596030 | orchestrator | Saturday 07 March 2026 01:14:32 +0000 (0:00:01.489) 0:04:25.238 ******** 2026-03-07 01:19:27.596036 | orchestrator | skipping: [testbed-node-3] 2026-03-07 01:19:27.596043 | orchestrator | skipping: [testbed-node-4] 2026-03-07 01:19:27.596049 | orchestrator | skipping: [testbed-node-5] 2026-03-07 01:19:27.596055 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:19:27.596061 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:19:27.596068 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:19:27.596075 | orchestrator | 2026-03-07 01:19:27.596086 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2026-03-07 01:19:27.596096 | orchestrator | Saturday 07 March 2026 01:14:32 +0000 (0:00:00.703) 0:04:25.941 ******** 2026-03-07 01:19:27.596106 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:19:27.596116 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:19:27.596126 | orchestrator | skipping: 
[testbed-node-2] 2026-03-07 01:19:27.596136 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-07 01:19:27.596147 | orchestrator | 2026-03-07 01:19:27.596164 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-03-07 01:19:27.596175 | orchestrator | Saturday 07 March 2026 01:14:34 +0000 (0:00:01.184) 0:04:27.126 ******** 2026-03-07 01:19:27.596186 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2026-03-07 01:19:27.596198 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2026-03-07 01:19:27.596208 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2026-03-07 01:19:27.596225 | orchestrator | 2026-03-07 01:19:27.596232 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-03-07 01:19:27.596238 | orchestrator | Saturday 07 March 2026 01:14:34 +0000 (0:00:00.735) 0:04:27.862 ******** 2026-03-07 01:19:27.596245 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2026-03-07 01:19:27.596251 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2026-03-07 01:19:27.596257 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2026-03-07 01:19:27.596263 | orchestrator | 2026-03-07 01:19:27.596270 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-03-07 01:19:27.596276 | orchestrator | Saturday 07 March 2026 01:14:36 +0000 (0:00:01.462) 0:04:29.324 ******** 2026-03-07 01:19:27.596283 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2026-03-07 01:19:27.596289 | orchestrator | skipping: [testbed-node-3] 2026-03-07 01:19:27.596295 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2026-03-07 01:19:27.596301 | orchestrator | skipping: [testbed-node-4] 2026-03-07 01:19:27.596328 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2026-03-07 01:19:27.596336 | 
orchestrator | skipping: [testbed-node-5] 2026-03-07 01:19:27.596342 | orchestrator | 2026-03-07 01:19:27.596348 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2026-03-07 01:19:27.596354 | orchestrator | Saturday 07 March 2026 01:14:36 +0000 (0:00:00.566) 0:04:29.891 ******** 2026-03-07 01:19:27.596361 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-07 01:19:27.596367 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-07 01:19:27.596374 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:19:27.596380 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-07 01:19:27.596387 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2026-03-07 01:19:27.596393 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2026-03-07 01:19:27.596399 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-07 01:19:27.596406 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:19:27.596412 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-07 01:19:27.596418 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-07 01:19:27.596424 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:19:27.596431 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-03-07 01:19:27.596445 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-03-07 01:19:27.596451 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2026-03-07 01:19:27.596458 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-03-07 01:19:27.596464 | orchestrator | 2026-03-07 
01:19:27.596471 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2026-03-07 01:19:27.596478 | orchestrator | Saturday 07 March 2026 01:14:39 +0000 (0:00:02.410) 0:04:32.302 ******** 2026-03-07 01:19:27.596484 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:19:27.596491 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:19:27.596497 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:19:27.596503 | orchestrator | changed: [testbed-node-3] 2026-03-07 01:19:27.596510 | orchestrator | changed: [testbed-node-4] 2026-03-07 01:19:27.596516 | orchestrator | changed: [testbed-node-5] 2026-03-07 01:19:27.596522 | orchestrator | 2026-03-07 01:19:27.596529 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2026-03-07 01:19:27.596535 | orchestrator | Saturday 07 March 2026 01:14:40 +0000 (0:00:01.232) 0:04:33.535 ******** 2026-03-07 01:19:27.596542 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:19:27.596549 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:19:27.596562 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:19:27.596569 | orchestrator | changed: [testbed-node-3] 2026-03-07 01:19:27.596575 | orchestrator | changed: [testbed-node-5] 2026-03-07 01:19:27.596582 | orchestrator | changed: [testbed-node-4] 2026-03-07 01:19:27.596588 | orchestrator | 2026-03-07 01:19:27.596595 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2026-03-07 01:19:27.596602 | orchestrator | Saturday 07 March 2026 01:14:42 +0000 (0:00:02.032) 0:04:35.567 ******** 2026-03-07 01:19:27.596615 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-07 01:19:27.596625 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-07 01:19:27.596632 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-07 01:19:27.596646 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-07 01:19:27.596654 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-07 01:19:27.596666 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-07 01:19:27.596678 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-07 01:19:27.596685 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-07 01:19:27.596692 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-07 01:19:27.596699 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-07 01:19:27.596712 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-07 01:19:27.596728 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-07 01:19:27.596735 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-07 01:19:27.596746 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-07 01:19:27.596753 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-07 01:19:27.596760 | orchestrator | 2026-03-07 01:19:27.596766 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-07 01:19:27.596773 | orchestrator | Saturday 07 March 2026 01:14:44 +0000 (0:00:02.405) 0:04:37.973 ******** 2026-03-07 01:19:27.596779 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 01:19:27.596787 | orchestrator | 2026-03-07 01:19:27.596794 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-03-07 01:19:27.596800 | orchestrator | Saturday 07 March 2026 01:14:46 +0000 (0:00:01.466) 0:04:39.439 ******** 2026-03-07 01:19:27.596813 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-07 01:19:27.596826 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-07 01:19:27.596837 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-07 01:19:27.596844 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-07 01:19:27.596851 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-07 01:19:27.596858 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-07 01:19:27.596875 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 
'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-07 01:19:27.596883 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-07 01:19:27.596889 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-07 01:19:27.596901 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-07 01:19:27.596908 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-07 01:19:27.596914 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-07 01:19:27.596925 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 
'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-07 01:19:27.596938 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-07 01:19:27.596945 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-07 01:19:27.596952 | orchestrator | 2026-03-07 01:19:27.596959 | orchestrator | TASK 
[service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-03-07 01:19:27.596967 | orchestrator | Saturday 07 March 2026 01:14:50 +0000 (0:00:04.158) 0:04:43.597 ******** 2026-03-07 01:19:27.596977 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-07 01:19:27.596985 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-07 01:19:27.596992 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': 
True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-07 01:19:27.597007 | orchestrator | skipping: [testbed-node-3] 2026-03-07 01:19:27.597024 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-07 01:19:27.597035 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-07 01:19:27.597052 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-07 01:19:27.597063 | orchestrator | skipping: [testbed-node-4] 2026-03-07 01:19:27.597075 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-07 01:19:27.597086 | orchestrator | skipping: [testbed-node-5] => (item={'key': 
'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-07 01:19:27.597114 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-07 01:19:27.597128 | orchestrator | skipping: [testbed-node-5] 2026-03-07 01:19:27.597141 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-07 01:19:27.597150 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-07 01:19:27.597157 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:19:27.597168 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-07 01:19:27.597175 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  
2026-03-07 01:19:27.597182 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:19:27.597189 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-07 01:19:27.597201 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-07 01:19:27.597208 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:19:27.597214 | orchestrator | 2026-03-07 01:19:27.597220 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-03-07 01:19:27.597232 | orchestrator | Saturday 07 March 2026 01:14:52 +0000 (0:00:01.918) 0:04:45.516 ******** 2026-03-07 01:19:27.597239 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-07 01:19:27.597246 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-07 01:19:27.597256 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': 
'30'}}})  2026-03-07 01:19:27.597263 | orchestrator | skipping: [testbed-node-5] 2026-03-07 01:19:27.597271 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-07 01:19:27.597282 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-07 01:19:27.597294 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-07 01:19:27.597301 | orchestrator | skipping: [testbed-node-3] 2026-03-07 01:19:27.597418 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-07 01:19:27.597450 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  
2026-03-07 01:19:27.597463 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-07 01:19:27.597477 | orchestrator | skipping: [testbed-node-4] 2026-03-07 01:19:27.597484 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-07 01:19:27.597491 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-07 01:19:27.597498 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:19:27.597515 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-07 01:19:27.597522 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-07 01:19:27.597529 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-07 01:19:27.597540 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-07 01:19:27.597546 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:19:27.597553 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:19:27.597563 | orchestrator | 2026-03-07 01:19:27.597571 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-07 01:19:27.597578 | orchestrator | Saturday 07 March 2026 01:14:55 +0000 (0:00:02.619) 0:04:48.135 ******** 2026-03-07 01:19:27.597585 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:19:27.597591 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:19:27.597597 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:19:27.597604 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-07 01:19:27.597611 | orchestrator | 2026-03-07 01:19:27.597617 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2026-03-07 01:19:27.597624 | orchestrator | Saturday 07 March 2026 01:14:56 +0000 (0:00:01.351) 0:04:49.486 ******** 2026-03-07 01:19:27.597630 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-07 01:19:27.597636 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-07 01:19:27.597643 | orchestrator | 
ok: [testbed-node-5 -> localhost] 2026-03-07 01:19:27.597649 | orchestrator | 2026-03-07 01:19:27.597655 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2026-03-07 01:19:27.597662 | orchestrator | Saturday 07 March 2026 01:14:57 +0000 (0:00:01.345) 0:04:50.832 ******** 2026-03-07 01:19:27.597668 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-07 01:19:27.597674 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-07 01:19:27.597680 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-07 01:19:27.597686 | orchestrator | 2026-03-07 01:19:27.597693 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2026-03-07 01:19:27.597699 | orchestrator | Saturday 07 March 2026 01:14:59 +0000 (0:00:01.219) 0:04:52.052 ******** 2026-03-07 01:19:27.597706 | orchestrator | ok: [testbed-node-3] 2026-03-07 01:19:27.597712 | orchestrator | ok: [testbed-node-4] 2026-03-07 01:19:27.597718 | orchestrator | ok: [testbed-node-5] 2026-03-07 01:19:27.597725 | orchestrator | 2026-03-07 01:19:27.597731 | orchestrator | TASK [nova-cell : Extract cinder key from file] ******************************** 2026-03-07 01:19:27.597737 | orchestrator | Saturday 07 March 2026 01:14:59 +0000 (0:00:00.899) 0:04:52.951 ******** 2026-03-07 01:19:27.597743 | orchestrator | ok: [testbed-node-3] 2026-03-07 01:19:27.597750 | orchestrator | ok: [testbed-node-4] 2026-03-07 01:19:27.597756 | orchestrator | ok: [testbed-node-5] 2026-03-07 01:19:27.597763 | orchestrator | 2026-03-07 01:19:27.597769 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 2026-03-07 01:19:27.597775 | orchestrator | Saturday 07 March 2026 01:15:00 +0000 (0:00:00.965) 0:04:53.917 ******** 2026-03-07 01:19:27.597781 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-03-07 01:19:27.597789 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-03-07 
01:19:27.597799 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-03-07 01:19:27.597806 | orchestrator | 2026-03-07 01:19:27.597813 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2026-03-07 01:19:27.597819 | orchestrator | Saturday 07 March 2026 01:15:02 +0000 (0:00:01.348) 0:04:55.265 ******** 2026-03-07 01:19:27.597825 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-03-07 01:19:27.597832 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-03-07 01:19:27.597838 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-03-07 01:19:27.597844 | orchestrator | 2026-03-07 01:19:27.597851 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2026-03-07 01:19:27.597857 | orchestrator | Saturday 07 March 2026 01:15:03 +0000 (0:00:01.276) 0:04:56.541 ******** 2026-03-07 01:19:27.597863 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-03-07 01:19:27.597870 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-03-07 01:19:27.597876 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-03-07 01:19:27.597882 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt) 2026-03-07 01:19:27.597894 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt) 2026-03-07 01:19:27.597901 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt) 2026-03-07 01:19:27.597907 | orchestrator | 2026-03-07 01:19:27.597913 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2026-03-07 01:19:27.597920 | orchestrator | Saturday 07 March 2026 01:15:08 +0000 (0:00:04.660) 0:05:01.202 ******** 2026-03-07 01:19:27.597926 | orchestrator | skipping: [testbed-node-3] 2026-03-07 01:19:27.597933 | orchestrator | skipping: [testbed-node-4] 2026-03-07 01:19:27.597939 | orchestrator | skipping: [testbed-node-5] 
2026-03-07 01:19:27.597945 | orchestrator | 2026-03-07 01:19:27.597951 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2026-03-07 01:19:27.597957 | orchestrator | Saturday 07 March 2026 01:15:08 +0000 (0:00:00.574) 0:05:01.776 ******** 2026-03-07 01:19:27.597964 | orchestrator | skipping: [testbed-node-3] 2026-03-07 01:19:27.597970 | orchestrator | skipping: [testbed-node-4] 2026-03-07 01:19:27.597976 | orchestrator | skipping: [testbed-node-5] 2026-03-07 01:19:27.597983 | orchestrator | 2026-03-07 01:19:27.597989 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2026-03-07 01:19:27.597995 | orchestrator | Saturday 07 March 2026 01:15:09 +0000 (0:00:00.325) 0:05:02.102 ******** 2026-03-07 01:19:27.598002 | orchestrator | changed: [testbed-node-4] 2026-03-07 01:19:27.598008 | orchestrator | changed: [testbed-node-3] 2026-03-07 01:19:27.598140 | orchestrator | changed: [testbed-node-5] 2026-03-07 01:19:27.598153 | orchestrator | 2026-03-07 01:19:27.598161 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2026-03-07 01:19:27.598172 | orchestrator | Saturday 07 March 2026 01:15:10 +0000 (0:00:01.203) 0:05:03.305 ******** 2026-03-07 01:19:27.598192 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-03-07 01:19:27.598209 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-03-07 01:19:27.598219 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-03-07 01:19:27.598229 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-03-07 
01:19:27.598239 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-03-07 01:19:27.598248 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-03-07 01:19:27.598258 | orchestrator | 2026-03-07 01:19:27.598268 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2026-03-07 01:19:27.598277 | orchestrator | Saturday 07 March 2026 01:15:13 +0000 (0:00:03.289) 0:05:06.594 ******** 2026-03-07 01:19:27.598287 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-07 01:19:27.598296 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-07 01:19:27.598304 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-07 01:19:27.598365 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-07 01:19:27.598374 | orchestrator | changed: [testbed-node-3] 2026-03-07 01:19:27.598384 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-07 01:19:27.598394 | orchestrator | changed: [testbed-node-4] 2026-03-07 01:19:27.598404 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-07 01:19:27.598413 | orchestrator | changed: [testbed-node-5] 2026-03-07 01:19:27.598423 | orchestrator | 2026-03-07 01:19:27.598433 | orchestrator | TASK [nova-cell : Include tasks from qemu_wrapper.yml] ************************* 2026-03-07 01:19:27.598443 | orchestrator | Saturday 07 March 2026 01:15:17 +0000 (0:00:03.815) 0:05:10.410 ******** 2026-03-07 01:19:27.598453 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:19:27.598463 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:19:27.598485 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:19:27.598496 | orchestrator | included: /ansible/roles/nova-cell/tasks/qemu_wrapper.yml for testbed-node-3, testbed-node-4, 
testbed-node-5 2026-03-07 01:19:27.598505 | orchestrator | 2026-03-07 01:19:27.598511 | orchestrator | TASK [nova-cell : Check qemu wrapper file] ************************************* 2026-03-07 01:19:27.598518 | orchestrator | Saturday 07 March 2026 01:15:19 +0000 (0:00:02.127) 0:05:12.537 ******** 2026-03-07 01:19:27.598524 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-07 01:19:27.598530 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-07 01:19:27.598537 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-07 01:19:27.598543 | orchestrator | 2026-03-07 01:19:27.598563 | orchestrator | TASK [nova-cell : Copy qemu wrapper] ******************************************* 2026-03-07 01:19:27.598569 | orchestrator | Saturday 07 March 2026 01:15:21 +0000 (0:00:01.493) 0:05:14.031 ******** 2026-03-07 01:19:27.598576 | orchestrator | skipping: [testbed-node-3] 2026-03-07 01:19:27.598582 | orchestrator | skipping: [testbed-node-4] 2026-03-07 01:19:27.598588 | orchestrator | skipping: [testbed-node-5] 2026-03-07 01:19:27.598594 | orchestrator | 2026-03-07 01:19:27.598601 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2026-03-07 01:19:27.598607 | orchestrator | Saturday 07 March 2026 01:15:21 +0000 (0:00:00.404) 0:05:14.435 ******** 2026-03-07 01:19:27.598613 | orchestrator | skipping: [testbed-node-3] 2026-03-07 01:19:27.598619 | orchestrator | 2026-03-07 01:19:27.598626 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2026-03-07 01:19:27.598636 | orchestrator | Saturday 07 March 2026 01:15:21 +0000 (0:00:00.129) 0:05:14.565 ******** 2026-03-07 01:19:27.598646 | orchestrator | skipping: [testbed-node-3] 2026-03-07 01:19:27.598655 | orchestrator | skipping: [testbed-node-4] 2026-03-07 01:19:27.598664 | orchestrator | skipping: [testbed-node-5] 2026-03-07 01:19:27.598674 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:19:27.598684 | 
orchestrator | skipping: [testbed-node-1] 2026-03-07 01:19:27.598694 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:19:27.598703 | orchestrator | 2026-03-07 01:19:27.598713 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2026-03-07 01:19:27.598724 | orchestrator | Saturday 07 March 2026 01:15:22 +0000 (0:00:00.708) 0:05:15.273 ******** 2026-03-07 01:19:27.598734 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-07 01:19:27.598743 | orchestrator | 2026-03-07 01:19:27.598754 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************ 2026-03-07 01:19:27.598764 | orchestrator | Saturday 07 March 2026 01:15:23 +0000 (0:00:01.150) 0:05:16.424 ******** 2026-03-07 01:19:27.598775 | orchestrator | skipping: [testbed-node-3] 2026-03-07 01:19:27.598785 | orchestrator | skipping: [testbed-node-4] 2026-03-07 01:19:27.598796 | orchestrator | skipping: [testbed-node-5] 2026-03-07 01:19:27.598807 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:19:27.598817 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:19:27.598827 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:19:27.598834 | orchestrator | 2026-03-07 01:19:27.598840 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2026-03-07 01:19:27.598847 | orchestrator | Saturday 07 March 2026 01:15:24 +0000 (0:00:00.734) 0:05:17.158 ******** 2026-03-07 01:19:27.598860 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', 
'/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-07 01:19:27.598881 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-07 01:19:27.598897 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': 
{'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-07 01:19:27.598905 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-07 01:19:27.598912 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-07 01:19:27.598919 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-07 01:19:27.598929 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-07 01:19:27.598941 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-07 01:19:27.598948 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-07 01:19:27.598958 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-07 01:19:27.598965 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-07 01:19:27.598972 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-07 01:19:27.598982 | orchestrator 
| changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-07 01:19:27.598993 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-07 01:19:27.599000 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 
'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-07 01:19:27.599006 | orchestrator | 2026-03-07 01:19:27.599013 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2026-03-07 01:19:27.599019 | orchestrator | Saturday 07 March 2026 01:15:28 +0000 (0:00:04.167) 0:05:21.326 ******** 2026-03-07 01:19:27.599030 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-07 01:19:27.599038 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': 
'30'}}})  2026-03-07 01:19:27.599048 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-07 01:19:27.599059 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-07 01:19:27.599066 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-07 01:19:27.599073 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-07 01:19:27.599084 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-07 01:19:27.599091 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 
'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-07 01:19:27.599098 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-07 01:19:27.599113 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-07 01:19:27.599119 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-07 01:19:27.599126 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-07 01:19:27.599137 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-07 01:19:27.599144 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-07 01:19:27.599150 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-07 01:19:27.599161 | orchestrator | 2026-03-07 01:19:27.599167 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2026-03-07 01:19:27.599174 | orchestrator | Saturday 07 March 2026 01:15:35 +0000 (0:00:07.118) 0:05:28.444 ******** 2026-03-07 01:19:27.599180 | orchestrator | skipping: [testbed-node-4] 2026-03-07 01:19:27.599186 | orchestrator | skipping: [testbed-node-3] 2026-03-07 01:19:27.599196 | orchestrator | skipping: [testbed-node-5] 2026-03-07 01:19:27.599203 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:19:27.599209 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:19:27.599215 | orchestrator | skipping: [testbed-node-2] 
2026-03-07 01:19:27.599222 | orchestrator | 2026-03-07 01:19:27.599228 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2026-03-07 01:19:27.599234 | orchestrator | Saturday 07 March 2026 01:15:37 +0000 (0:00:02.244) 0:05:30.689 ******** 2026-03-07 01:19:27.599241 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-03-07 01:19:27.599247 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-03-07 01:19:27.599253 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-03-07 01:19:27.599260 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-03-07 01:19:27.599267 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-03-07 01:19:27.599273 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:19:27.599279 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:19:27.599286 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-03-07 01:19:27.599292 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-03-07 01:19:27.599299 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:19:27.599305 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-03-07 01:19:27.599339 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-03-07 01:19:27.599346 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-03-07 01:19:27.599352 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-03-07 01:19:27.599358 | orchestrator | changed: [testbed-node-5] => 
(item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-03-07 01:19:27.599365 | orchestrator | 2026-03-07 01:19:27.599374 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2026-03-07 01:19:27.599384 | orchestrator | Saturday 07 March 2026 01:15:41 +0000 (0:00:03.656) 0:05:34.345 ******** 2026-03-07 01:19:27.599394 | orchestrator | skipping: [testbed-node-3] 2026-03-07 01:19:27.599404 | orchestrator | skipping: [testbed-node-4] 2026-03-07 01:19:27.599414 | orchestrator | skipping: [testbed-node-5] 2026-03-07 01:19:27.599425 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:19:27.599435 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:19:27.599446 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:19:27.599454 | orchestrator | 2026-03-07 01:19:27.599460 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2026-03-07 01:19:27.599466 | orchestrator | Saturday 07 March 2026 01:15:41 +0000 (0:00:00.659) 0:05:35.004 ******** 2026-03-07 01:19:27.599478 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-03-07 01:19:27.599484 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-03-07 01:19:27.599497 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-03-07 01:19:27.599504 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-03-07 01:19:27.599515 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-03-07 01:19:27.599525 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-03-07 
01:19:27.599536 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-03-07 01:19:27.599547 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-03-07 01:19:27.599556 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-03-07 01:19:27.599567 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-03-07 01:19:27.599573 | orchestrator | skipping: [testbed-node-0]
2026-03-07 01:19:27.599580 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-03-07 01:19:27.599586 | orchestrator | skipping: [testbed-node-1]
2026-03-07 01:19:27.599592 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-03-07 01:19:27.599598 | orchestrator | skipping: [testbed-node-2]
2026-03-07 01:19:27.599604 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-03-07 01:19:27.599610 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-03-07 01:19:27.599617 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-03-07 01:19:27.599628 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-03-07 01:19:27.599634 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-03-07 01:19:27.599641 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-03-07 01:19:27.599647 | orchestrator |
2026-03-07 01:19:27.599654 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] **********************************
2026-03-07 01:19:27.599660 | orchestrator | Saturday 07 March 2026 01:15:48 +0000 (0:00:06.420) 0:05:41.425 ********
2026-03-07 01:19:27.599666 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-03-07 01:19:27.599672 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-03-07 01:19:27.599679 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-03-07 01:19:27.599685 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-03-07 01:19:27.599691 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-03-07 01:19:27.599697 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-03-07 01:19:27.599703 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-03-07 01:19:27.599709 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-03-07 01:19:27.599715 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-03-07 01:19:27.599721 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-03-07 01:19:27.599733 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-03-07 01:19:27.599739 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-03-07 01:19:27.599745 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-03-07 01:19:27.599751 | orchestrator | skipping: [testbed-node-0]
2026-03-07 01:19:27.599758 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-03-07 01:19:27.599764 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-03-07 01:19:27.599770 | orchestrator | skipping: [testbed-node-1]
2026-03-07 01:19:27.599776 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-03-07 01:19:27.599783 | orchestrator | skipping: [testbed-node-2]
2026-03-07 01:19:27.599789 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-03-07 01:19:27.599801 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-03-07 01:19:27.599812 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-03-07 01:19:27.599822 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-03-07 01:19:27.599833 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-03-07 01:19:27.599844 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-03-07 01:19:27.599855 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-03-07 01:19:27.599862 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-03-07 01:19:27.599868 | orchestrator |
2026-03-07 01:19:27.599874 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ******************************
2026-03-07 01:19:27.599880 | orchestrator | Saturday 07 March 2026 01:15:56 +0000 (0:00:07.864) 0:05:49.289 ********
2026-03-07 01:19:27.599886 | orchestrator | skipping: [testbed-node-3]
2026-03-07 01:19:27.599892 | orchestrator | skipping: [testbed-node-4]
2026-03-07 01:19:27.599899 | orchestrator | skipping: [testbed-node-5]
2026-03-07 01:19:27.599905 | orchestrator | skipping: [testbed-node-0]
2026-03-07 01:19:27.599911 | orchestrator | skipping: [testbed-node-1]
2026-03-07 01:19:27.599917 | orchestrator | skipping: [testbed-node-2]
2026-03-07 01:19:27.599923 | orchestrator |
2026-03-07 01:19:27.599931 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] *********************
2026-03-07 01:19:27.599941 | orchestrator | Saturday 07 March 2026 01:15:57 +0000 (0:00:00.931) 0:05:50.221 ********
2026-03-07 01:19:27.599952 | orchestrator | skipping: [testbed-node-3]
2026-03-07 01:19:27.599963 | orchestrator | skipping: [testbed-node-4]
2026-03-07 01:19:27.599974 | orchestrator | skipping: [testbed-node-5]
2026-03-07 01:19:27.599985 | orchestrator | skipping: [testbed-node-0]
2026-03-07 01:19:27.599995 | orchestrator | skipping: [testbed-node-1]
2026-03-07 01:19:27.600006 | orchestrator | skipping: [testbed-node-2]
2026-03-07 01:19:27.600014 | orchestrator |
2026-03-07 01:19:27.600020 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ******************
2026-03-07 01:19:27.600026 | orchestrator | Saturday 07 March 2026 01:15:57 +0000 (0:00:00.660) 0:05:50.881 ********
2026-03-07 01:19:27.600032 | orchestrator | skipping: [testbed-node-0]
2026-03-07 01:19:27.600039 | orchestrator | skipping: [testbed-node-1]
2026-03-07 01:19:27.600045 | orchestrator | skipping: [testbed-node-2]
2026-03-07 01:19:27.600056 | orchestrator | changed: [testbed-node-3]
2026-03-07 01:19:27.600067 | orchestrator | changed: [testbed-node-4]
2026-03-07 01:19:27.600077 | orchestrator | changed: [testbed-node-5]
2026-03-07 01:19:27.600088 | orchestrator |
2026-03-07 01:19:27.600096 | orchestrator | TASK [nova-cell : Copying over existing policy file] ***************************
2026-03-07 01:19:27.600108 | orchestrator | Saturday 07 March 2026 01:16:00 +0000 (0:00:02.262) 0:05:53.144 ********
2026-03-07 01:19:27.600123 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-07 01:19:27.600130 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-07 01:19:27.600142 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-07 01:19:27.600149 | orchestrator | skipping: [testbed-node-3]
2026-03-07 01:19:27.600155 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-07 01:19:27.600162 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-07 01:19:27.600172 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-07 01:19:27.600183 | orchestrator | skipping: [testbed-node-4]
2026-03-07 01:19:27.600190 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-07 01:19:27.600197 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-07 01:19:27.600203 | orchestrator | skipping: [testbed-node-0]
2026-03-07 01:19:27.600217 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-07 01:19:27.600228 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-07 01:19:27.600239 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-07 01:19:27.600256 | orchestrator | skipping: [testbed-node-5]
2026-03-07 01:19:27.600272 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-07 01:19:27.600284 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-07 01:19:27.600295 | orchestrator | skipping: [testbed-node-1]
2026-03-07 01:19:27.600324 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-07 01:19:27.600344 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-07 01:19:27.600356 | orchestrator | skipping: [testbed-node-2]
2026-03-07 01:19:27.600367 | orchestrator |
2026-03-07 01:19:27.600379 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ******************
2026-03-07 01:19:27.600390 | orchestrator | Saturday 07 March 2026 01:16:02 +0000 (0:00:02.035) 0:05:55.180 ********
2026-03-07 01:19:27.600401 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2026-03-07 01:19:27.600412 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2026-03-07 01:19:27.600423 | orchestrator | skipping: [testbed-node-3]
2026-03-07 01:19:27.600432 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2026-03-07 01:19:27.600438 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2026-03-07 01:19:27.600444 | orchestrator | skipping: [testbed-node-4]
2026-03-07 01:19:27.600450 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2026-03-07 01:19:27.600457 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2026-03-07 01:19:27.600463 | orchestrator | skipping: [testbed-node-5]
2026-03-07 01:19:27.600469 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2026-03-07 01:19:27.600475 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2026-03-07 01:19:27.600481 | orchestrator | skipping: [testbed-node-0]
2026-03-07 01:19:27.600495 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2026-03-07 01:19:27.600501 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2026-03-07 01:19:27.600510 | orchestrator | skipping: [testbed-node-1]
2026-03-07 01:19:27.600521 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2026-03-07 01:19:27.600530 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2026-03-07 01:19:27.600536 | orchestrator | skipping: [testbed-node-2]
2026-03-07 01:19:27.600542 | orchestrator |
2026-03-07 01:19:27.600548 | orchestrator | TASK [nova-cell : Check nova-cell containers] **********************************
2026-03-07 01:19:27.600555 | orchestrator | Saturday 07 March 2026 01:16:03 +0000 (0:00:01.021) 0:05:56.202 ********
2026-03-07 01:19:27.600565 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-07 01:19:27.600573 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-07 01:19:27.600579 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-07 01:19:27.600591 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-07 01:19:27.600603 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-07 01:19:27.600609 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-07 01:19:27.600620 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-07 01:19:27.600626 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-07 01:19:27.600633 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-07 01:19:27.600641 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-07 01:19:27.600657 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-07 01:19:27.600674 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-07 01:19:27.600684 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-07 01:19:27.600691 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-07 01:19:27.600698 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-07 01:19:27.600704 | orchestrator |
2026-03-07 01:19:27.600711 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2026-03-07 01:19:27.600718 | orchestrator | Saturday 07 March 2026 01:16:06 +0000 (0:00:03.303) 0:05:59.505 ********
2026-03-07 01:19:27.600724 | orchestrator | skipping: [testbed-node-3]
2026-03-07 01:19:27.600731 | orchestrator | skipping: [testbed-node-4]
2026-03-07 01:19:27.600737 | orchestrator | skipping: [testbed-node-5]
2026-03-07 01:19:27.600743 | orchestrator | skipping: [testbed-node-0]
2026-03-07 01:19:27.600749 | orchestrator | skipping: [testbed-node-1]
2026-03-07 01:19:27.600755 | orchestrator | skipping: [testbed-node-2]
2026-03-07 01:19:27.600761 | orchestrator |
2026-03-07 01:19:27.600768 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-03-07 01:19:27.600774 | orchestrator | Saturday 07 March 2026 01:16:07 +0000 (0:00:00.935) 0:06:00.441 ********
2026-03-07 01:19:27.600789 | orchestrator |
2026-03-07 01:19:27.600795 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-03-07 01:19:27.600805 | orchestrator | Saturday 07 March 2026 01:16:07 +0000 (0:00:00.146) 0:06:00.588 ********
2026-03-07 01:19:27.600811 | orchestrator |
2026-03-07 01:19:27.600818 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-03-07 01:19:27.600824 | orchestrator | Saturday 07 March 2026 01:16:07 +0000 (0:00:00.149) 0:06:00.737 ********
2026-03-07 01:19:27.600830 | orchestrator |
2026-03-07 01:19:27.600837 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-03-07 01:19:27.600843 | orchestrator | Saturday 07 March 2026 01:16:07 +0000 (0:00:00.141) 0:06:00.878 ********
2026-03-07 01:19:27.600849 | orchestrator |
2026-03-07 01:19:27.600855 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-03-07 01:19:27.600862 | orchestrator | Saturday 07 March 2026 01:16:08 +0000 (0:00:00.351) 0:06:01.229 ********
2026-03-07 01:19:27.600868 | orchestrator |
2026-03-07 01:19:27.600874 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-03-07 01:19:27.600881 | orchestrator | Saturday 07 March 2026 01:16:08 +0000 (0:00:00.185) 0:06:01.415 ********
2026-03-07 01:19:27.600887 | orchestrator |
2026-03-07 01:19:27.600893 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] *****************
2026-03-07 01:19:27.600899 | orchestrator | Saturday 07 March 2026 01:16:08 +0000 (0:00:00.195) 0:06:01.611 ********
2026-03-07 01:19:27.600905 | orchestrator | changed: [testbed-node-0]
2026-03-07 01:19:27.600912 | orchestrator | changed: [testbed-node-1]
2026-03-07 01:19:27.600918 | orchestrator | changed: [testbed-node-2]
2026-03-07 01:19:27.600924 | orchestrator |
2026-03-07 01:19:27.600930 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] ****************
2026-03-07 01:19:27.600937 | orchestrator | Saturday 07 March 2026 01:16:21 +0000 (0:00:13.039) 0:06:14.650 ********
2026-03-07 01:19:27.600943 | orchestrator | changed: [testbed-node-0]
2026-03-07 01:19:27.600949 | orchestrator | changed: [testbed-node-2]
2026-03-07 01:19:27.600955 | orchestrator | changed: [testbed-node-1]
2026-03-07 01:19:27.600961 | orchestrator |
2026-03-07 01:19:27.600968 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] ***********************
2026-03-07 01:19:27.600974 | orchestrator | Saturday 07 March 2026 01:16:36 +0000 (0:00:14.379) 0:06:29.030 ********
2026-03-07 01:19:27.600980 | orchestrator | changed: [testbed-node-3]
2026-03-07 01:19:27.600986 | orchestrator | changed: [testbed-node-4]
2026-03-07 01:19:27.600992 | orchestrator | changed: [testbed-node-5]
2026-03-07 01:19:27.600998 | orchestrator |
2026-03-07 01:19:27.601004 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] *******************
2026-03-07 01:19:27.601011 | orchestrator | Saturday 07 March 2026 01:16:59 +0000 (0:00:23.905) 0:06:52.936 ********
2026-03-07 01:19:27.601017 | orchestrator | changed: [testbed-node-3]
2026-03-07 01:19:27.601023 | orchestrator | changed: [testbed-node-4]
2026-03-07 01:19:27.601029 | orchestrator | changed: [testbed-node-5]
2026-03-07 01:19:27.601035 | orchestrator |
2026-03-07 01:19:27.601042 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] **************
2026-03-07 01:19:27.601051 | orchestrator | Saturday 07 March 2026 01:17:36 +0000 (0:00:36.337) 0:07:29.273 ********
2026-03-07 01:19:27.601058 | orchestrator | changed: [testbed-node-4]
2026-03-07 01:19:27.601064 | orchestrator | changed: [testbed-node-3]
2026-03-07 01:19:27.601070 | orchestrator | changed: [testbed-node-5]
2026-03-07 01:19:27.601076 | orchestrator |
2026-03-07 01:19:27.601082 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] *************************
2026-03-07 01:19:27.601089 | orchestrator | Saturday 07 March 2026 01:17:37 +0000 (0:00:00.861) 0:07:30.135 ********
2026-03-07 01:19:27.601095 | orchestrator | changed: [testbed-node-3]
2026-03-07 01:19:27.601101 | orchestrator | changed: [testbed-node-4]
2026-03-07 01:19:27.601107 | orchestrator | changed: [testbed-node-5]
2026-03-07 01:19:27.601114 | orchestrator |
2026-03-07 01:19:27.601120 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] *******************
2026-03-07 01:19:27.601131 | orchestrator | Saturday 07 March 2026 01:17:37 +0000 (0:00:00.805) 0:07:30.941 ********
2026-03-07 01:19:27.601138 | orchestrator | changed: [testbed-node-3]
2026-03-07 01:19:27.601144 | orchestrator | changed: [testbed-node-5]
2026-03-07 01:19:27.601150 | orchestrator | changed: [testbed-node-4]
2026-03-07 01:19:27.601156 | orchestrator |
2026-03-07 01:19:27.601166 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] ***
2026-03-07 01:19:27.601172 | orchestrator | Saturday 07 March 2026 01:18:04 +0000 (0:00:26.117) 0:07:57.058 ********
2026-03-07 01:19:27.601179 | orchestrator | skipping: [testbed-node-3]
2026-03-07 01:19:27.601185 | orchestrator |
2026-03-07 01:19:27.601191 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] ****
2026-03-07 01:19:27.601197 | orchestrator | Saturday 07 March 2026 01:18:04 +0000 (0:00:00.136) 0:07:57.195 ********
2026-03-07 01:19:27.601204 | orchestrator | skipping: [testbed-node-3]
2026-03-07 01:19:27.601210 | orchestrator | skipping: [testbed-node-4]
2026-03-07 01:19:27.601216 | orchestrator | skipping: [testbed-node-0]
2026-03-07 01:19:27.601222 | orchestrator | skipping: [testbed-node-2]
2026-03-07 01:19:27.601228 | orchestrator | skipping: [testbed-node-1]
2026-03-07 01:19:27.601235 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left).
2026-03-07 01:19:27.601241 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-03-07 01:19:27.601247 | orchestrator |
2026-03-07 01:19:27.601254 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] *************
2026-03-07 01:19:27.601260 | orchestrator | Saturday 07 March 2026 01:18:28 +0000 (0:00:24.327) 0:08:21.522 ********
2026-03-07 01:19:27.601267 | orchestrator | skipping: [testbed-node-0]
2026-03-07 01:19:27.601273 | orchestrator | skipping: [testbed-node-1]
2026-03-07 01:19:27.601279 | orchestrator | skipping: [testbed-node-3]
2026-03-07 01:19:27.601285 | orchestrator | skipping: [testbed-node-4]
2026-03-07 01:19:27.601293 | orchestrator | skipping: [testbed-node-5]
2026-03-07 01:19:27.601303 | orchestrator | skipping: [testbed-node-2]
2026-03-07 01:19:27.601330 | orchestrator |
2026-03-07 01:19:27.601340 | orchestrator | TASK [nova-cell : Include discover_computes.yml] *******************************
2026-03-07 01:19:27.601351 | orchestrator | Saturday 07 March 2026 01:18:40 +0000 (0:00:12.115) 0:08:33.638 ********
2026-03-07 01:19:27.601361 | orchestrator | skipping: [testbed-node-3]
2026-03-07 01:19:27.601372 | orchestrator | skipping: [testbed-node-2]
2026-03-07 01:19:27.601382 | orchestrator | skipping: [testbed-node-4]
2026-03-07 01:19:27.601399 | orchestrator | skipping: [testbed-node-1]
2026-03-07 01:19:27.601406 | orchestrator | skipping: [testbed-node-0]
2026-03-07 01:19:27.601412 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-5
2026-03-07 01:19:27.601418 | orchestrator |
2026-03-07 01:19:27.601425 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-03-07 01:19:27.601431 | orchestrator | Saturday 07 March 2026 01:18:45 +0000 (0:00:04.641) 0:08:38.279 ********
2026-03-07 01:19:27.601437 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-03-07 01:19:27.601444 |
orchestrator | 2026-03-07 01:19:27.601450 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-03-07 01:19:27.601456 | orchestrator | Saturday 07 March 2026 01:18:59 +0000 (0:00:14.163) 0:08:52.443 ******** 2026-03-07 01:19:27.601463 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-03-07 01:19:27.601469 | orchestrator | 2026-03-07 01:19:27.601476 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2026-03-07 01:19:27.601482 | orchestrator | Saturday 07 March 2026 01:19:01 +0000 (0:00:01.823) 0:08:54.267 ******** 2026-03-07 01:19:27.601488 | orchestrator | skipping: [testbed-node-5] 2026-03-07 01:19:27.601494 | orchestrator | 2026-03-07 01:19:27.601501 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2026-03-07 01:19:27.601507 | orchestrator | Saturday 07 March 2026 01:19:03 +0000 (0:00:02.718) 0:08:56.985 ******** 2026-03-07 01:19:27.601519 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-03-07 01:19:27.601526 | orchestrator | 2026-03-07 01:19:27.601532 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************ 2026-03-07 01:19:27.601538 | orchestrator | Saturday 07 March 2026 01:19:16 +0000 (0:00:12.377) 0:09:09.363 ******** 2026-03-07 01:19:27.601545 | orchestrator | ok: [testbed-node-3] 2026-03-07 01:19:27.601551 | orchestrator | ok: [testbed-node-4] 2026-03-07 01:19:27.601562 | orchestrator | ok: [testbed-node-5] 2026-03-07 01:19:27.601572 | orchestrator | ok: [testbed-node-0] 2026-03-07 01:19:27.601583 | orchestrator | ok: [testbed-node-1] 2026-03-07 01:19:27.601589 | orchestrator | ok: [testbed-node-2] 2026-03-07 01:19:27.601596 | orchestrator | 2026-03-07 01:19:27.601602 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2026-03-07 01:19:27.601608 | orchestrator | 2026-03-07 
01:19:27.601614 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2026-03-07 01:19:27.601620 | orchestrator | Saturday 07 March 2026 01:19:18 +0000 (0:00:01.978) 0:09:11.342 ******** 2026-03-07 01:19:27.601627 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:19:27.601633 | orchestrator | changed: [testbed-node-1] 2026-03-07 01:19:27.601639 | orchestrator | changed: [testbed-node-2] 2026-03-07 01:19:27.601645 | orchestrator | 2026-03-07 01:19:27.601651 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2026-03-07 01:19:27.601658 | orchestrator | 2026-03-07 01:19:27.601664 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2026-03-07 01:19:27.601678 | orchestrator | Saturday 07 March 2026 01:19:19 +0000 (0:00:01.342) 0:09:12.684 ******** 2026-03-07 01:19:27.601684 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:19:27.601690 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:19:27.601697 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:19:27.601703 | orchestrator | 2026-03-07 01:19:27.601709 | orchestrator | PLAY [Reload Nova cell services] *********************************************** 2026-03-07 01:19:27.601718 | orchestrator | 2026-03-07 01:19:27.601729 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] ********* 2026-03-07 01:19:27.601739 | orchestrator | Saturday 07 March 2026 01:19:20 +0000 (0:00:00.949) 0:09:13.634 ******** 2026-03-07 01:19:27.601749 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)  2026-03-07 01:19:27.601759 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-03-07 01:19:27.601769 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-03-07 01:19:27.601779 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)  2026-03-07 01:19:27.601788 | orchestrator | 
skipping: [testbed-node-3] => (item=nova-serialproxy)  2026-03-07 01:19:27.601794 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)  2026-03-07 01:19:27.601801 | orchestrator | skipping: [testbed-node-3] 2026-03-07 01:19:27.601808 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)  2026-03-07 01:19:27.601819 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2026-03-07 01:19:27.601829 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-03-07 01:19:27.601839 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)  2026-03-07 01:19:27.601850 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)  2026-03-07 01:19:27.601856 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)  2026-03-07 01:19:27.601862 | orchestrator | skipping: [testbed-node-4] 2026-03-07 01:19:27.601869 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)  2026-03-07 01:19:27.601875 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-03-07 01:19:27.601881 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2026-03-07 01:19:27.601887 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)  2026-03-07 01:19:27.601893 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)  2026-03-07 01:19:27.601899 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)  2026-03-07 01:19:27.601912 | orchestrator | skipping: [testbed-node-5] 2026-03-07 01:19:27.601918 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)  2026-03-07 01:19:27.601925 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2026-03-07 01:19:27.601931 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-03-07 01:19:27.601937 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)  2026-03-07 01:19:27.601943 | orchestrator | 
skipping: [testbed-node-0] => (item=nova-serialproxy)  2026-03-07 01:19:27.601949 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)  2026-03-07 01:19:27.601955 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:19:27.601967 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)  2026-03-07 01:19:27.601974 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2026-03-07 01:19:27.601980 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-03-07 01:19:27.601986 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)  2026-03-07 01:19:27.601993 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)  2026-03-07 01:19:27.601999 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)  2026-03-07 01:19:27.602005 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:19:27.602051 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)  2026-03-07 01:19:27.602061 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-03-07 01:19:27.602067 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2026-03-07 01:19:27.602073 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)  2026-03-07 01:19:27.602079 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)  2026-03-07 01:19:27.602086 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)  2026-03-07 01:19:27.602092 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:19:27.602099 | orchestrator | 2026-03-07 01:19:27.602105 | orchestrator | PLAY [Reload global Nova API services] ***************************************** 2026-03-07 01:19:27.602112 | orchestrator | 2026-03-07 01:19:27.602118 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] *************** 2026-03-07 01:19:27.602124 | orchestrator | Saturday 07 March 2026 01:19:22 +0000 (0:00:01.598) 
0:09:15.233 ******** 2026-03-07 01:19:27.602131 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)  2026-03-07 01:19:27.602137 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2026-03-07 01:19:27.602144 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:19:27.602150 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)  2026-03-07 01:19:27.602156 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2026-03-07 01:19:27.602162 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:19:27.602168 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)  2026-03-07 01:19:27.602174 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)  2026-03-07 01:19:27.602181 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:19:27.602187 | orchestrator | 2026-03-07 01:19:27.602193 | orchestrator | PLAY [Run Nova API online data migrations] ************************************* 2026-03-07 01:19:27.602199 | orchestrator | 2026-03-07 01:19:27.602205 | orchestrator | TASK [nova : Run Nova API online database migrations] ************************** 2026-03-07 01:19:27.602212 | orchestrator | Saturday 07 March 2026 01:19:23 +0000 (0:00:00.897) 0:09:16.130 ******** 2026-03-07 01:19:27.602218 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:19:27.602224 | orchestrator | 2026-03-07 01:19:27.602235 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************ 2026-03-07 01:19:27.602241 | orchestrator | 2026-03-07 01:19:27.602247 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ******************** 2026-03-07 01:19:27.602253 | orchestrator | Saturday 07 March 2026 01:19:23 +0000 (0:00:00.738) 0:09:16.869 ******** 2026-03-07 01:19:27.602265 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:19:27.602276 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:19:27.602297 | orchestrator | skipping: [testbed-node-2] 
2026-03-07 01:19:27.602338 | orchestrator | 2026-03-07 01:19:27.602349 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-07 01:19:27.602361 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-07 01:19:27.602372 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=45  rescued=0 ignored=0 2026-03-07 01:19:27.602379 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=52  rescued=0 ignored=0 2026-03-07 01:19:27.602385 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=52  rescued=0 ignored=0 2026-03-07 01:19:27.602392 | orchestrator | testbed-node-3 : ok=40  changed=27  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2026-03-07 01:19:27.602398 | orchestrator | testbed-node-4 : ok=39  changed=27  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-03-07 01:19:27.602404 | orchestrator | testbed-node-5 : ok=44  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2026-03-07 01:19:27.602410 | orchestrator | 2026-03-07 01:19:27.602416 | orchestrator | 2026-03-07 01:19:27.602422 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-07 01:19:27.602429 | orchestrator | Saturday 07 March 2026 01:19:24 +0000 (0:00:00.730) 0:09:17.599 ******** 2026-03-07 01:19:27.602435 | orchestrator | =============================================================================== 2026-03-07 01:19:27.602441 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 36.34s 2026-03-07 01:19:27.602447 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 33.88s 2026-03-07 01:19:27.602454 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 26.12s 2026-03-07 01:19:27.602460 | orchestrator | nova-cell : 
Waiting for nova-compute services to register themselves --- 24.33s 2026-03-07 01:19:27.602466 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 23.91s 2026-03-07 01:19:27.602477 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 23.46s 2026-03-07 01:19:27.602483 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 20.21s 2026-03-07 01:19:27.602489 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 17.69s 2026-03-07 01:19:27.602495 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 16.95s 2026-03-07 01:19:27.602502 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 14.93s 2026-03-07 01:19:27.602508 | orchestrator | nova-cell : Create cell ------------------------------------------------ 14.88s 2026-03-07 01:19:27.602514 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 14.38s 2026-03-07 01:19:27.602520 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 14.16s 2026-03-07 01:19:27.602530 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 13.73s 2026-03-07 01:19:27.602540 | orchestrator | nova-cell : Restart nova-conductor container --------------------------- 13.04s 2026-03-07 01:19:27.602550 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 12.38s 2026-03-07 01:19:27.602560 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------ 12.12s 2026-03-07 01:19:27.602570 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 9.14s 2026-03-07 01:19:27.602581 | orchestrator | nova-cell : Copying files for nova-ssh ---------------------------------- 7.86s 2026-03-07 01:19:27.602591 | orchestrator | service-ks-register : nova 
| Granting user roles ------------------------ 7.71s 2026-03-07 01:19:27.602610 | orchestrator | 2026-03-07 01:19:27 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:19:30.639619 | orchestrator | 2026-03-07 01:19:30 | INFO  | Task dcccee1c-f381-4b56-bc80-79ff84f652d8 is in state STARTED 2026-03-07 01:19:30.639718 | orchestrator | 2026-03-07 01:19:30 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:19:33.682704 | orchestrator | 2026-03-07 01:19:33 | INFO  | Task dcccee1c-f381-4b56-bc80-79ff84f652d8 is in state STARTED 2026-03-07 01:19:33.682792 | orchestrator | 2026-03-07 01:19:33 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:19:36.722264 | orchestrator | 2026-03-07 01:19:36 | INFO  | Task dcccee1c-f381-4b56-bc80-79ff84f652d8 is in state STARTED 2026-03-07 01:19:36.722382 | orchestrator | 2026-03-07 01:19:36 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:19:39.770537 | orchestrator | 2026-03-07 01:19:39 | INFO  | Task dcccee1c-f381-4b56-bc80-79ff84f652d8 is in state STARTED 2026-03-07 01:19:39.770619 | orchestrator | 2026-03-07 01:19:39 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:19:42.820794 | orchestrator | 2026-03-07 01:19:42 | INFO  | Task dcccee1c-f381-4b56-bc80-79ff84f652d8 is in state STARTED 2026-03-07 01:19:42.820908 | orchestrator | 2026-03-07 01:19:42 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:19:45.864504 | orchestrator | 2026-03-07 01:19:45 | INFO  | Task dcccee1c-f381-4b56-bc80-79ff84f652d8 is in state STARTED 2026-03-07 01:19:45.864586 | orchestrator | 2026-03-07 01:19:45 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:19:48.909583 | orchestrator | 2026-03-07 01:19:48 | INFO  | Task dcccee1c-f381-4b56-bc80-79ff84f652d8 is in state STARTED 2026-03-07 01:19:48.909688 | orchestrator | 2026-03-07 01:19:48 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:19:51.962394 | orchestrator | 2026-03-07 01:19:51 | INFO  | Task 
dcccee1c-f381-4b56-bc80-79ff84f652d8 is in state STARTED 2026-03-07 01:19:51.962504 | orchestrator | 2026-03-07 01:19:51 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:19:55.002877 | orchestrator | 2026-03-07 01:19:54 | INFO  | Task dcccee1c-f381-4b56-bc80-79ff84f652d8 is in state SUCCESS 2026-03-07 01:19:55.003128 | orchestrator | 2026-03-07 01:19:55 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-07 01:19:55.005032 | orchestrator | 2026-03-07 01:19:55.005084 | orchestrator | 2026-03-07 01:19:55.005097 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-07 01:19:55.005106 | orchestrator | 2026-03-07 01:19:55.005114 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-07 01:19:55.005123 | orchestrator | Saturday 07 March 2026 01:14:39 +0000 (0:00:00.314) 0:00:00.314 ******** 2026-03-07 01:19:55.005131 | orchestrator | ok: [testbed-node-0] 2026-03-07 01:19:55.005141 | orchestrator | ok: [testbed-node-1] 2026-03-07 01:19:55.005149 | orchestrator | ok: [testbed-node-2] 2026-03-07 01:19:55.005156 | orchestrator | 2026-03-07 01:19:55.005165 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-07 01:19:55.005172 | orchestrator | Saturday 07 March 2026 01:14:39 +0000 (0:00:00.328) 0:00:00.643 ******** 2026-03-07 01:19:55.005180 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2026-03-07 01:19:55.005188 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2026-03-07 01:19:55.005196 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2026-03-07 01:19:55.005203 | orchestrator | 2026-03-07 01:19:55.005211 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2026-03-07 01:19:55.005219 | orchestrator | 2026-03-07 01:19:55.005228 | orchestrator | TASK [octavia : include_tasks] 
************************************************* 2026-03-07 01:19:55.005237 | orchestrator | Saturday 07 March 2026 01:14:39 +0000 (0:00:00.494) 0:00:01.137 ******** 2026-03-07 01:19:55.005313 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 01:19:55.005326 | orchestrator | 2026-03-07 01:19:55.005334 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 2026-03-07 01:19:55.005343 | orchestrator | Saturday 07 March 2026 01:14:40 +0000 (0:00:00.710) 0:00:01.848 ******** 2026-03-07 01:19:55.005352 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2026-03-07 01:19:55.005361 | orchestrator | 2026-03-07 01:19:55.005369 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] ********************** 2026-03-07 01:19:55.005378 | orchestrator | Saturday 07 March 2026 01:14:44 +0000 (0:00:03.746) 0:00:05.594 ******** 2026-03-07 01:19:55.005385 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2026-03-07 01:19:55.005392 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2026-03-07 01:19:55.005397 | orchestrator | 2026-03-07 01:19:55.005402 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2026-03-07 01:19:55.005407 | orchestrator | Saturday 07 March 2026 01:14:51 +0000 (0:00:07.495) 0:00:13.090 ******** 2026-03-07 01:19:55.005412 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-07 01:19:55.005418 | orchestrator | 2026-03-07 01:19:55.005423 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2026-03-07 01:19:55.005428 | orchestrator | Saturday 07 March 2026 01:14:55 +0000 (0:00:03.477) 0:00:16.567 ******** 2026-03-07 01:19:55.005433 | orchestrator | changed: [testbed-node-0] => 
(item=octavia -> service) 2026-03-07 01:19:55.005439 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2026-03-07 01:19:55.005444 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-07 01:19:55.005449 | orchestrator | 2026-03-07 01:19:55.005454 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2026-03-07 01:19:55.005459 | orchestrator | Saturday 07 March 2026 01:15:04 +0000 (0:00:08.714) 0:00:25.282 ******** 2026-03-07 01:19:55.005464 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-07 01:19:55.005469 | orchestrator | 2026-03-07 01:19:55.005475 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 2026-03-07 01:19:55.005480 | orchestrator | Saturday 07 March 2026 01:15:07 +0000 (0:00:03.759) 0:00:29.042 ******** 2026-03-07 01:19:55.005485 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2026-03-07 01:19:55.005490 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2026-03-07 01:19:55.005495 | orchestrator | 2026-03-07 01:19:55.005512 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2026-03-07 01:19:55.005517 | orchestrator | Saturday 07 March 2026 01:15:15 +0000 (0:00:07.373) 0:00:36.415 ******** 2026-03-07 01:19:55.005522 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2026-03-07 01:19:55.005527 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer) 2026-03-07 01:19:55.005532 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member) 2026-03-07 01:19:55.005537 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2026-03-07 01:19:55.005542 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2026-03-07 01:19:55.005547 | orchestrator | 2026-03-07 01:19:55.005553 | orchestrator | TASK [octavia : 
include_tasks] ************************************************* 2026-03-07 01:19:55.005557 | orchestrator | Saturday 07 March 2026 01:15:31 +0000 (0:00:16.670) 0:00:53.086 ******** 2026-03-07 01:19:55.005563 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 01:19:55.005568 | orchestrator | 2026-03-07 01:19:55.005573 | orchestrator | TASK [octavia : Create amphora flavor] ***************************************** 2026-03-07 01:19:55.005578 | orchestrator | Saturday 07 March 2026 01:15:32 +0000 (0:00:00.816) 0:00:53.903 ******** 2026-03-07 01:19:55.005589 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:19:55.005594 | orchestrator | 2026-03-07 01:19:55.005599 | orchestrator | TASK [octavia : Create nova keypair for amphora] ******************************* 2026-03-07 01:19:55.005608 | orchestrator | Saturday 07 March 2026 01:15:38 +0000 (0:00:05.811) 0:00:59.714 ******** 2026-03-07 01:19:55.005616 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:19:55.005624 | orchestrator | 2026-03-07 01:19:55.005632 | orchestrator | TASK [octavia : Get service project id] **************************************** 2026-03-07 01:19:55.005654 | orchestrator | Saturday 07 March 2026 01:15:43 +0000 (0:00:05.136) 0:01:04.851 ******** 2026-03-07 01:19:55.005663 | orchestrator | ok: [testbed-node-0] 2026-03-07 01:19:55.005671 | orchestrator | 2026-03-07 01:19:55.005680 | orchestrator | TASK [octavia : Create security groups for octavia] **************************** 2026-03-07 01:19:55.005689 | orchestrator | Saturday 07 March 2026 01:15:47 +0000 (0:00:03.645) 0:01:08.496 ******** 2026-03-07 01:19:55.005697 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2026-03-07 01:19:55.005705 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2026-03-07 01:19:55.005713 | orchestrator | 2026-03-07 01:19:55.005718 | orchestrator | TASK [octavia : Add rules for 
security groups] ********************************* 2026-03-07 01:19:55.005723 | orchestrator | Saturday 07 March 2026 01:15:57 +0000 (0:00:10.098) 0:01:18.595 ******** 2026-03-07 01:19:55.005728 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}]) 2026-03-07 01:19:55.005734 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': 22, 'dst_port': 22}]) 2026-03-07 01:19:55.005741 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}]) 2026-03-07 01:19:55.005748 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}]) 2026-03-07 01:19:55.005753 | orchestrator | 2026-03-07 01:19:55.005758 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************ 2026-03-07 01:19:55.005763 | orchestrator | Saturday 07 March 2026 01:16:14 +0000 (0:00:17.149) 0:01:35.744 ******** 2026-03-07 01:19:55.005768 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:19:55.005773 | orchestrator | 2026-03-07 01:19:55.005779 | orchestrator | TASK [octavia : Create loadbalancer management subnet] ************************* 2026-03-07 01:19:55.005784 | orchestrator | Saturday 07 March 2026 01:16:19 +0000 (0:00:04.917) 0:01:40.662 ******** 2026-03-07 01:19:55.005789 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:19:55.005794 | orchestrator | 2026-03-07 01:19:55.005799 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] **************** 2026-03-07 01:19:55.005804 | orchestrator | Saturday 07 March 2026 01:16:24 +0000 (0:00:05.562) 0:01:46.225 ******** 2026-03-07 01:19:55.005809 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:19:55.005814 | orchestrator | 
2026-03-07 01:19:55.005819 | orchestrator | TASK [octavia : Update loadbalancer management subnet] ************************* 2026-03-07 01:19:55.005824 | orchestrator | Saturday 07 March 2026 01:16:25 +0000 (0:00:00.249) 0:01:46.474 ******** 2026-03-07 01:19:55.005829 | orchestrator | ok: [testbed-node-0] 2026-03-07 01:19:55.005834 | orchestrator | 2026-03-07 01:19:55.005839 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-07 01:19:55.005844 | orchestrator | Saturday 07 March 2026 01:16:30 +0000 (0:00:05.102) 0:01:51.577 ******** 2026-03-07 01:19:55.005849 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 01:19:55.005854 | orchestrator | 2026-03-07 01:19:55.005859 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] ***************** 2026-03-07 01:19:55.005864 | orchestrator | Saturday 07 March 2026 01:16:31 +0000 (0:00:01.191) 0:01:52.769 ******** 2026-03-07 01:19:55.005869 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:19:55.005875 | orchestrator | changed: [testbed-node-1] 2026-03-07 01:19:55.005885 | orchestrator | changed: [testbed-node-2] 2026-03-07 01:19:55.005890 | orchestrator | 2026-03-07 01:19:55.005895 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ******************** 2026-03-07 01:19:55.005900 | orchestrator | Saturday 07 March 2026 01:16:37 +0000 (0:00:05.756) 0:01:58.525 ******** 2026-03-07 01:19:55.005905 | orchestrator | changed: [testbed-node-1] 2026-03-07 01:19:55.005910 | orchestrator | changed: [testbed-node-2] 2026-03-07 01:19:55.005915 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:19:55.005920 | orchestrator | 2026-03-07 01:19:55.005925 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************ 2026-03-07 01:19:55.005934 | orchestrator | Saturday 07 March 2026 01:16:42 +0000 
(0:00:05.071) 0:02:03.596 ******** 2026-03-07 01:19:55.005939 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:19:55.005944 | orchestrator | changed: [testbed-node-1] 2026-03-07 01:19:55.005950 | orchestrator | changed: [testbed-node-2] 2026-03-07 01:19:55.005955 | orchestrator | 2026-03-07 01:19:55.005960 | orchestrator | TASK [octavia : Install isc-dhcp-client package] ******************************* 2026-03-07 01:19:55.005965 | orchestrator | Saturday 07 March 2026 01:16:43 +0000 (0:00:01.041) 0:02:04.637 ******** 2026-03-07 01:19:55.005970 | orchestrator | ok: [testbed-node-0] 2026-03-07 01:19:55.005975 | orchestrator | ok: [testbed-node-1] 2026-03-07 01:19:55.005980 | orchestrator | ok: [testbed-node-2] 2026-03-07 01:19:55.005985 | orchestrator | 2026-03-07 01:19:55.005990 | orchestrator | TASK [octavia : Create octavia dhclient conf] ********************************** 2026-03-07 01:19:55.005995 | orchestrator | Saturday 07 March 2026 01:16:45 +0000 (0:00:02.384) 0:02:07.022 ******** 2026-03-07 01:19:55.006000 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:19:55.006006 | orchestrator | changed: [testbed-node-1] 2026-03-07 01:19:55.006011 | orchestrator | changed: [testbed-node-2] 2026-03-07 01:19:55.006048 | orchestrator | 2026-03-07 01:19:55.006055 | orchestrator | TASK [octavia : Create octavia-interface service] ****************************** 2026-03-07 01:19:55.006060 | orchestrator | Saturday 07 March 2026 01:16:47 +0000 (0:00:01.458) 0:02:08.480 ******** 2026-03-07 01:19:55.006065 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:19:55.006070 | orchestrator | changed: [testbed-node-1] 2026-03-07 01:19:55.006076 | orchestrator | changed: [testbed-node-2] 2026-03-07 01:19:55.006081 | orchestrator | 2026-03-07 01:19:55.006086 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] ***************** 2026-03-07 01:19:55.006091 | orchestrator | Saturday 07 March 2026 01:16:48 +0000 (0:00:01.240) 0:02:09.720 
******** 2026-03-07 01:19:55.006101 | orchestrator | changed: [testbed-node-1] 2026-03-07 01:19:55.006107 | orchestrator | changed: [testbed-node-2] 2026-03-07 01:19:55.006113 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:19:55.006121 | orchestrator | 2026-03-07 01:19:55.006135 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ******************** 2026-03-07 01:19:55.006143 | orchestrator | Saturday 07 March 2026 01:16:50 +0000 (0:00:02.157) 0:02:11.877 ******** 2026-03-07 01:19:55.006156 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:19:55.006166 | orchestrator | changed: [testbed-node-2] 2026-03-07 01:19:55.006174 | orchestrator | changed: [testbed-node-1] 2026-03-07 01:19:55.006182 | orchestrator | 2026-03-07 01:19:55.006189 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] ***************************** 2026-03-07 01:19:55.006197 | orchestrator | Saturday 07 March 2026 01:16:52 +0000 (0:00:01.942) 0:02:13.820 ******** 2026-03-07 01:19:55.006205 | orchestrator | ok: [testbed-node-0] 2026-03-07 01:19:55.006213 | orchestrator | ok: [testbed-node-1] 2026-03-07 01:19:55.006221 | orchestrator | ok: [testbed-node-2] 2026-03-07 01:19:55.006229 | orchestrator | 2026-03-07 01:19:55.006237 | orchestrator | TASK [octavia : Gather facts] ************************************************** 2026-03-07 01:19:55.006245 | orchestrator | Saturday 07 March 2026 01:16:53 +0000 (0:00:00.748) 0:02:14.568 ******** 2026-03-07 01:19:55.006253 | orchestrator | ok: [testbed-node-2] 2026-03-07 01:19:55.006261 | orchestrator | ok: [testbed-node-1] 2026-03-07 01:19:55.006317 | orchestrator | ok: [testbed-node-0] 2026-03-07 01:19:55.006335 | orchestrator | 2026-03-07 01:19:55.006344 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-07 01:19:55.006353 | orchestrator | Saturday 07 March 2026 01:16:58 +0000 (0:00:04.839) 0:02:19.407 ******** 2026-03-07 01:19:55.006361 | 
orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 01:19:55.006370 | orchestrator | 2026-03-07 01:19:55.006378 | orchestrator | TASK [octavia : Get amphora flavor info] *************************************** 2026-03-07 01:19:55.006386 | orchestrator | Saturday 07 March 2026 01:16:58 +0000 (0:00:00.826) 0:02:20.234 ******** 2026-03-07 01:19:55.006395 | orchestrator | ok: [testbed-node-0] 2026-03-07 01:19:55.006403 | orchestrator | 2026-03-07 01:19:55.006411 | orchestrator | TASK [octavia : Get service project id] **************************************** 2026-03-07 01:19:55.006420 | orchestrator | Saturday 07 March 2026 01:17:03 +0000 (0:00:04.252) 0:02:24.486 ******** 2026-03-07 01:19:55.006429 | orchestrator | ok: [testbed-node-0] 2026-03-07 01:19:55.006437 | orchestrator | 2026-03-07 01:19:55.006446 | orchestrator | TASK [octavia : Get security groups for octavia] ******************************* 2026-03-07 01:19:55.006454 | orchestrator | Saturday 07 March 2026 01:17:06 +0000 (0:00:03.394) 0:02:27.881 ******** 2026-03-07 01:19:55.006462 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2026-03-07 01:19:55.006471 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2026-03-07 01:19:55.006479 | orchestrator | 2026-03-07 01:19:55.006487 | orchestrator | TASK [octavia : Get loadbalancer management network] *************************** 2026-03-07 01:19:55.006497 | orchestrator | Saturday 07 March 2026 01:17:13 +0000 (0:00:07.152) 0:02:35.034 ******** 2026-03-07 01:19:55.006505 | orchestrator | ok: [testbed-node-0] 2026-03-07 01:19:55.006514 | orchestrator | 2026-03-07 01:19:55.006522 | orchestrator | TASK [octavia : Set octavia resources facts] *********************************** 2026-03-07 01:19:55.006531 | orchestrator | Saturday 07 March 2026 01:17:17 +0000 (0:00:03.791) 0:02:38.825 ******** 2026-03-07 01:19:55.006539 | orchestrator | ok: 
[testbed-node-0] 2026-03-07 01:19:55.006547 | orchestrator | ok: [testbed-node-1] 2026-03-07 01:19:55.006556 | orchestrator | ok: [testbed-node-2] 2026-03-07 01:19:55.006565 | orchestrator | 2026-03-07 01:19:55.006573 | orchestrator | TASK [octavia : Ensuring config directories exist] ***************************** 2026-03-07 01:19:55.006582 | orchestrator | Saturday 07 March 2026 01:17:17 +0000 (0:00:00.356) 0:02:39.181 ******** 2026-03-07 01:19:55.006601 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-07 01:19:55.006631 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-07 01:19:55.006645 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-07 01:19:55.006652 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-07 01:19:55.006659 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': 
{'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-07 01:19:55.006665 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-07 01:19:55.006674 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-07 01:19:55.006681 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-07 01:19:55.006711 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-07 01:19:55.006717 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-07 01:19:55.006724 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-07 01:19:55.006729 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-07 01:19:55.006736 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-07 01:19:55.006747 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-07 01:19:55.006753 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-07 01:19:55.006762 | orchestrator | 2026-03-07 01:19:55.006767 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************ 2026-03-07 01:19:55.006773 | orchestrator | Saturday 07 March 2026 01:17:20 +0000 (0:00:02.501) 0:02:41.683 ******** 2026-03-07 01:19:55.006778 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:19:55.006783 | orchestrator | 2026-03-07 01:19:55.006793 | orchestrator | TASK [octavia : Set octavia policy file] *************************************** 2026-03-07 01:19:55.006798 | orchestrator | Saturday 07 March 2026 01:17:20 +0000 (0:00:00.158) 0:02:41.842 ******** 2026-03-07 01:19:55.006803 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:19:55.006809 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:19:55.006814 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:19:55.006819 | orchestrator | 2026-03-07 01:19:55.006824 | orchestrator | TASK [octavia : Copying over existing policy file] ***************************** 2026-03-07 01:19:55.006829 | orchestrator | Saturday 07 March 2026 01:17:21 +0000 (0:00:00.562) 0:02:42.404 ******** 2026-03-07 01:19:55.006835 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': 
{'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-07 01:19:55.006841 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-07 01:19:55.006846 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-07 01:19:55.006855 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-07 01:19:55.006864 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-07 01:19:55.006870 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:19:55.006886 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-07 01:19:55.006892 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-07 01:19:55.006898 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-07 01:19:55.006903 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-07 01:19:55.006916 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-07 01:19:55.006926 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:19:55.006931 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-07 01:19:55.006948 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': 
{'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-07 01:19:55.006954 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-07 01:19:55.006959 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-07 01:19:55.006965 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-07 01:19:55.006970 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:19:55.006975 | orchestrator | 2026-03-07 01:19:55.006980 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-07 01:19:55.006986 | orchestrator | Saturday 07 March 2026 01:17:21 +0000 (0:00:00.794) 0:02:43.198 ******** 2026-03-07 01:19:55.006991 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 01:19:55.006996 | orchestrator | 2026-03-07 01:19:55.007001 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ******** 2026-03-07 01:19:55.007010 | orchestrator | Saturday 07 March 2026 01:17:22 +0000 (0:00:00.705) 0:02:43.904 ******** 2026-03-07 01:19:55.007016 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-07 01:19:55.007056 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-07 01:19:55.007063 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 
'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-07 01:19:55.007068 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-07 01:19:55.007074 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-07 01:19:55.007079 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-07 01:19:55.007091 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-07 01:19:55.007097 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-07 01:19:55.007105 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-07 01:19:55.007111 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': 
['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-07 01:19:55.007116 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-07 01:19:55.007121 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-07 01:19:55.007133 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-07 01:19:55.007139 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-07 01:19:55.007147 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-07 01:19:55.007153 | orchestrator | 2026-03-07 01:19:55.007158 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] *** 2026-03-07 01:19:55.007163 | orchestrator | Saturday 07 March 2026 01:17:28 +0000 (0:00:05.673) 0:02:49.577 ******** 2026-03-07 01:19:55.007169 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-07 01:19:55.007174 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-07 01:19:55.007179 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-07 01:19:55.007191 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-07 01:19:55.007197 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-07 01:19:55.007202 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:19:55.007211 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-07 01:19:55.007217 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-07 01:19:55.007222 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-07 01:19:55.007228 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-07 01:19:55.007237 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-07 01:19:55.007242 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:19:55.007250 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-07 01:19:55.007256 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-07 01:19:55.007316 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-07 01:19:55.007323 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-07 01:19:55.007328 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-07 01:19:55.007338 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:19:55.007343 | orchestrator | 2026-03-07 01:19:55.007348 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2026-03-07 01:19:55.007353 | orchestrator | Saturday 07 March 2026 01:17:29 +0000 (0:00:00.817) 0:02:50.394 ******** 2026-03-07 01:19:55.007362 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-07 01:19:55.007368 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-07 01:19:55.007373 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-07 01:19:55.007382 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-07 01:19:55.007387 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-07 01:19:55.007396 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:19:55.007402 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-07 01:19:55.007407 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-07 01:19:55.007417 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-07 01:19:55.007422 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-07 01:19:55.007432 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-07 01:19:55.007438 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:19:55.007443 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': 
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-07 01:19:55.007453 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-07 01:19:55.007458 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-07 01:19:55.007466 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-07 01:19:55.007472 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-07 01:19:55.007477 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:19:55.007482 | orchestrator | 2026-03-07 01:19:55.007488 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2026-03-07 01:19:55.007493 | orchestrator | Saturday 07 March 2026 01:17:30 +0000 (0:00:01.141) 0:02:51.535 ******** 2026-03-07 01:19:55.007503 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-07 01:19:55.007512 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-07 01:19:55.007518 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-07 01:19:55.007526 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-07 01:19:55.007532 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-07 01:19:55.007537 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-07 01:19:55.007546 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-07 01:19:55.007554 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-07 01:19:55.007560 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-07 01:19:55.007565 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-07 01:19:55.007573 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-07 01:19:55.007579 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-07 
01:19:55.007589 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-07 01:19:55.007595 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-07 01:19:55.007603 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-07 01:19:55.007609 | orchestrator | 2026-03-07 01:19:55.007614 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ******************************** 2026-03-07 
01:19:55.007619 | orchestrator | Saturday 07 March 2026 01:17:35 +0000 (0:00:05.261) 0:02:56.797 ******** 2026-03-07 01:19:55.007625 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-03-07 01:19:55.007630 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-03-07 01:19:55.007635 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-03-07 01:19:55.007641 | orchestrator | 2026-03-07 01:19:55.007646 | orchestrator | TASK [octavia : Copying over octavia.conf] ************************************* 2026-03-07 01:19:55.007651 | orchestrator | Saturday 07 March 2026 01:17:37 +0000 (0:00:02.079) 0:02:58.877 ******** 2026-03-07 01:19:55.007659 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-07 01:19:55.007665 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-07 01:19:55.007675 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-07 01:19:55.007684 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-07 01:19:55.007689 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-07 01:19:55.007695 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-07 01:19:55.007700 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 
3306'], 'timeout': '30'}}}) 2026-03-07 01:19:55.007708 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-07 01:19:55.007714 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-07 01:19:55.007726 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 
2026-03-07 01:19:55.007731 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-07 01:19:55.007737 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-07 01:19:55.007742 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-07 01:19:55.007750 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-07 01:19:55.007756 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-07 01:19:55.007761 | orchestrator |
2026-03-07 01:19:55.007766 | orchestrator | TASK [octavia : Copying over Octavia SSH key] **********************************
2026-03-07 01:19:55.007772 | orchestrator | Saturday 07 March 2026 01:18:00 +0000 (0:00:23.389) 0:03:22.267 ********
2026-03-07 01:19:55.007780 | orchestrator | changed: [testbed-node-0]
2026-03-07 01:19:55.007785 | orchestrator | changed: [testbed-node-1]
2026-03-07 01:19:55.007790 | orchestrator | changed: [testbed-node-2]
2026-03-07 01:19:55.007796 | orchestrator |
2026-03-07 01:19:55.007801 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ******************
2026-03-07 01:19:55.007806 | orchestrator | Saturday 07 March 2026 01:18:02 +0000 (0:00:01.558) 0:03:23.825 ********
2026-03-07 01:19:55.007811 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem)
2026-03-07 01:19:55.007817 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem)
2026-03-07 01:19:55.007824 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem)
2026-03-07 01:19:55.007830 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem)
2026-03-07 01:19:55.007835 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem)
2026-03-07 01:19:55.007840 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem)
2026-03-07 01:19:55.007845 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem)
2026-03-07 01:19:55.007851 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem)
2026-03-07 01:19:55.007856 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem)
2026-03-07 01:19:55.007861 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem)
2026-03-07 01:19:55.007866 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem)
2026-03-07 01:19:55.007871 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem)
2026-03-07 01:19:55.007876 | orchestrator |
2026-03-07 01:19:55.007881 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************
2026-03-07 01:19:55.007886 | orchestrator | Saturday 07 March 2026 01:18:10 +0000 (0:00:07.506) 0:03:31.332 ********
2026-03-07 01:19:55.007892 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem)
2026-03-07 01:19:55.007897 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem)
2026-03-07 01:19:55.007902 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem)
2026-03-07 01:19:55.007907 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem)
2026-03-07 01:19:55.007912 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem)
2026-03-07 01:19:55.007917 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem)
2026-03-07 01:19:55.007922 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem)
2026-03-07 01:19:55.007928 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem)
2026-03-07 01:19:55.007933 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem)
2026-03-07 01:19:55.007938 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem)
2026-03-07 01:19:55.007943 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem)
2026-03-07 01:19:55.007948 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem)
2026-03-07 01:19:55.007953 | orchestrator |
2026-03-07 01:19:55.007958 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] **********
2026-03-07 01:19:55.007973 | orchestrator | Saturday 07 March 2026 01:18:15 +0000 (0:00:05.908) 0:03:37.240 ********
2026-03-07 01:19:55.007978 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem)
2026-03-07 01:19:55.007990 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem)
2026-03-07 01:19:55.007995 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem)
2026-03-07 01:19:55.008001 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem)
2026-03-07 01:19:55.008010 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem)
2026-03-07 01:19:55.008022 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem)
2026-03-07 01:19:55.008031 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem)
2026-03-07 01:19:55.008039 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem)
2026-03-07 01:19:55.008052 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem)
2026-03-07 01:19:55.008060 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem)
2026-03-07 01:19:55.008069 | orchestrator | changed: [testbed-node-1] =>
(item=server_ca.key.pem) 2026-03-07 01:19:55.008077 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-03-07 01:19:55.008085 | orchestrator | 2026-03-07 01:19:55.008094 | orchestrator | TASK [octavia : Check octavia containers] ************************************** 2026-03-07 01:19:55.008103 | orchestrator | Saturday 07 March 2026 01:18:21 +0000 (0:00:05.487) 0:03:42.727 ******** 2026-03-07 01:19:55.008118 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-07 01:19:55.008136 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 
'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-07 01:19:55.008146 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-07 01:19:55.008154 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-07 01:19:55.008162 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 
'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-07 01:19:55.008179 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-07 01:19:55.008187 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-07 01:19:55.008199 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-07 01:19:55.008207 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-07 01:19:55.008215 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-07 01:19:55.008225 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-07 01:19:55.008239 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-07 01:19:55.008253 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-07 01:19:55.008260 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-07 01:19:55.008290 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-07 01:19:55.008296 | orchestrator |
2026-03-07 01:19:55.008301 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-03-07 01:19:55.008306 | orchestrator | Saturday 07 March 2026 01:18:25 +0000 (0:00:03.917) 0:03:46.644 ********
2026-03-07 01:19:55.008311 | orchestrator | skipping: [testbed-node-0]
2026-03-07 01:19:55.008317 | orchestrator | skipping: [testbed-node-1]
2026-03-07 01:19:55.008323 | orchestrator | skipping: [testbed-node-2]
2026-03-07 01:19:55.008332 | orchestrator |
2026-03-07 01:19:55.008341 | orchestrator | TASK [octavia : Creating Octavia database] *************************************
2026-03-07 01:19:55.008348 | orchestrator | Saturday 07 March 2026 01:18:25 +0000 (0:00:00.367) 0:03:47.012 ********
2026-03-07 01:19:55.008356 | orchestrator | changed: [testbed-node-0]
2026-03-07 01:19:55.008364 | orchestrator |
2026-03-07 01:19:55.008373 | orchestrator | TASK [octavia : Creating Octavia persistence database] *************************
2026-03-07 01:19:55.008381 | orchestrator | Saturday 07 March 2026 01:18:28 +0000 (0:00:02.400) 0:03:49.413 ********
2026-03-07 01:19:55.008389 | orchestrator | changed: [testbed-node-0]
2026-03-07 01:19:55.008398 | orchestrator |
2026-03-07 01:19:55.008405 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ********
2026-03-07 01:19:55.008413 | orchestrator | Saturday 07 March 2026 01:18:30 +0000 (0:00:02.560) 0:03:51.973 ********
2026-03-07 01:19:55.008422 | orchestrator | changed: [testbed-node-0]
2026-03-07 01:19:55.008431 | orchestrator |
2026-03-07 01:19:55.008439 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] ***
2026-03-07 01:19:55.008454 | orchestrator | Saturday 07 March 2026 01:18:33 +0000 (0:00:03.016) 0:03:54.990 ********
2026-03-07 01:19:55.008460 | orchestrator | changed: [testbed-node-0]
2026-03-07 01:19:55.008465 | orchestrator |
2026-03-07 01:19:55.008470 | orchestrator | TASK [octavia : Running Octavia bootstrap container] ***************************
2026-03-07 01:19:55.008475 | orchestrator | Saturday 07 March 2026 01:18:37 +0000 (0:00:03.660) 0:03:58.650 ********
2026-03-07 01:19:55.008480 | orchestrator | changed: [testbed-node-0]
2026-03-07 01:19:55.008485 | orchestrator |
2026-03-07 01:19:55.008490 | orchestrator | TASK [octavia : Flush handlers] ************************************************
2026-03-07 01:19:55.008495 | orchestrator | Saturday 07 March 2026 01:19:01 +0000 (0:00:23.840) 0:04:22.490 ********
2026-03-07 01:19:55.008500 | orchestrator |
2026-03-07 01:19:55.008505 | orchestrator | TASK [octavia : Flush handlers] ************************************************
2026-03-07 01:19:55.008510 | orchestrator | Saturday 07 March 2026 01:19:01 +0000 (0:00:00.078) 0:04:22.569 ********
2026-03-07 01:19:55.008515 | orchestrator |
2026-03-07 01:19:55.008521 | orchestrator | TASK [octavia : Flush handlers] ************************************************
2026-03-07 01:19:55.008526 | orchestrator | Saturday 07 March 2026 01:19:01 +0000 (0:00:00.091) 0:04:22.660 ********
2026-03-07 01:19:55.008531 | orchestrator |
2026-03-07 01:19:55.008536 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] **********************
2026-03-07 01:19:55.008541 | orchestrator | Saturday 07 March 2026 01:19:01 +0000 (0:00:00.075) 0:04:22.735 ********
2026-03-07 01:19:55.008546 | orchestrator | changed: [testbed-node-0]
2026-03-07 01:19:55.008551 | orchestrator | changed: [testbed-node-1]
2026-03-07 01:19:55.008556 | orchestrator | changed: [testbed-node-2]
2026-03-07 01:19:55.008561 | orchestrator |
2026-03-07 01:19:55.008566 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] *************
2026-03-07 01:19:55.008571 | orchestrator | Saturday 07 March 2026 01:19:18 +0000 (0:00:17.047) 0:04:39.783 ********
2026-03-07 01:19:55.008576 | orchestrator | changed: [testbed-node-2]
2026-03-07 01:19:55.008581 | orchestrator | changed: [testbed-node-1]
2026-03-07 01:19:55.008586 | orchestrator | changed: [testbed-node-0]
2026-03-07 01:19:55.008591 | orchestrator |
2026-03-07 01:19:55.008596 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] ***********
2026-03-07 01:19:55.008601 | orchestrator | Saturday 07 March 2026 01:19:26 +0000 (0:00:08.483) 0:04:48.267 ********
2026-03-07 01:19:55.008607 | orchestrator | changed: [testbed-node-1]
2026-03-07 01:19:55.008612 | orchestrator | changed: [testbed-node-2]
2026-03-07 01:19:55.008617 | orchestrator | changed: [testbed-node-0]
2026-03-07 01:19:55.008622 | orchestrator |
2026-03-07 01:19:55.008631 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] *************
2026-03-07 01:19:55.008636 | orchestrator | Saturday 07 March 2026 01:19:36 +0000 (0:00:09.184) 0:04:57.451 ********
2026-03-07 01:19:55.008641 | orchestrator | changed: [testbed-node-2]
2026-03-07 01:19:55.008647 | orchestrator | changed: [testbed-node-1]
2026-03-07 01:19:55.008652 | orchestrator | changed: [testbed-node-0]
2026-03-07 01:19:55.008657 | orchestrator |
2026-03-07 01:19:55.008662 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] *******************
2026-03-07 01:19:55.008667 | orchestrator | Saturday 07 March 2026 01:19:44 +0000 (0:00:08.582) 0:05:06.034 ********
2026-03-07 01:19:55.008672 | orchestrator | changed: [testbed-node-2]
2026-03-07 01:19:55.008677 | orchestrator | changed: [testbed-node-1]
2026-03-07 01:19:55.008682 | orchestrator | changed: [testbed-node-0]
2026-03-07 01:19:55.008687 | orchestrator |
2026-03-07 01:19:55.008692 | orchestrator | PLAY RECAP *********************************************************************
2026-03-07 01:19:55.008698 | orchestrator | testbed-node-0 : ok=57  changed=38  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-07 01:19:55.008704 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-07 01:19:55.008709 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-07 01:19:55.008718 | orchestrator |
2026-03-07 01:19:55.008723 | orchestrator |
2026-03-07 01:19:55.008728 | orchestrator | TASKS RECAP ********************************************************************
2026-03-07 01:19:55.008733 | orchestrator | Saturday 07 March 2026 01:19:53 +0000 (0:00:08.711) 0:05:14.745 ********
2026-03-07 01:19:55.008742 | orchestrator | ===============================================================================
2026-03-07 01:19:55.008747 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 23.84s
2026-03-07 01:19:55.008752 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 23.39s
2026-03-07 01:19:55.008757 | orchestrator | octavia : Add rules for security groups -------------------------------- 17.15s
2026-03-07 01:19:55.008762 | orchestrator | octavia : Restart octavia-api container -------------------------------- 17.05s
2026-03-07 01:19:55.008767 | orchestrator | octavia : Adding octavia related roles --------------------------------- 16.67s
2026-03-07 01:19:55.008772 | orchestrator | octavia : Create security groups for octavia --------------------------- 10.10s
2026-03-07 01:19:55.008777 | orchestrator | octavia : Restart octavia-health-manager container ---------------------- 9.18s
2026-03-07 01:19:55.008782 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 8.72s
2026-03-07 01:19:55.008787 | orchestrator | octavia : Restart octavia-worker container ------------------------------ 8.71s
2026-03-07 01:19:55.008792 | orchestrator | octavia : Restart octavia-housekeeping container ------------------------ 8.58s
2026-03-07 01:19:55.008797 | orchestrator | octavia : Restart octavia-driver-agent container ------------------------ 8.48s
2026-03-07 01:19:55.008802 | orchestrator | octavia : Copying certificate files for octavia-worker ------------------ 7.51s
2026-03-07 01:19:55.008807 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 7.50s
2026-03-07 01:19:55.008812 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.37s
2026-03-07 01:19:55.008817 | orchestrator | octavia : Get security groups for octavia ------------------------------- 7.15s
2026-03-07 01:19:55.008822 | orchestrator | octavia : Copying certificate files for octavia-housekeeping ------------ 5.91s
2026-03-07 01:19:55.008827 | orchestrator | octavia : Create amphora flavor ----------------------------------------- 5.81s
2026-03-07 01:19:55.008832 | orchestrator | octavia : Create ports for Octavia health-manager nodes ----------------- 5.76s
2026-03-07 01:19:55.008837 | orchestrator | service-cert-copy : octavia | Copying over extra CA certificates -------- 5.67s
2026-03-07 01:19:55.008842 | orchestrator | octavia : Create loadbalancer management subnet ------------------------- 5.56s
2026-03-07 01:19:58.043841 | orchestrator | 2026-03-07 01:19:58 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-07
01:20:01.080423 | orchestrator | 2026-03-07 01:20:01 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-07 01:20:04.123283 | orchestrator | 2026-03-07 01:20:04 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-07 01:20:07.166890 | orchestrator | 2026-03-07 01:20:07 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-07 01:20:10.208094 | orchestrator | 2026-03-07 01:20:10 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-07 01:20:13.244675 | orchestrator | 2026-03-07 01:20:13 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-07 01:20:16.279178 | orchestrator | 2026-03-07 01:20:16 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-07 01:20:19.327657 | orchestrator | 2026-03-07 01:20:19 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-07 01:20:22.370566 | orchestrator | 2026-03-07 01:20:22 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-07 01:20:25.414971 | orchestrator | 2026-03-07 01:20:25 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-07 01:20:28.457679 | orchestrator | 2026-03-07 01:20:28 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-07 01:20:31.491811 | orchestrator | 2026-03-07 01:20:31 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-07 01:20:34.536860 | orchestrator | 2026-03-07 01:20:34 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-07 01:20:37.586870 | orchestrator | 2026-03-07 01:20:37 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-07 01:20:40.629608 | orchestrator | 2026-03-07 01:20:40 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-07 01:20:43.670452 | orchestrator | 2026-03-07 01:20:43 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-07 01:20:46.704403 | orchestrator | 2026-03-07 01:20:46 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-07 01:20:49.743995 | orchestrator | 2026-03-07 
01:20:49 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-07 01:20:52.775866 | orchestrator | 2026-03-07 01:20:52 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-07 01:20:55.812388 | orchestrator | 2026-03-07 01:20:56.141641 | orchestrator | 2026-03-07 01:20:56.146229 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Sat Mar 7 01:20:56 UTC 2026 2026-03-07 01:20:56.146317 | orchestrator | 2026-03-07 01:20:56.452555 | orchestrator | ok: Runtime: 0:40:29.405786 2026-03-07 01:20:56.729458 | 2026-03-07 01:20:56.729645 | TASK [Bootstrap services] 2026-03-07 01:20:57.504847 | orchestrator | 2026-03-07 01:20:57.505070 | orchestrator | # BOOTSTRAP 2026-03-07 01:20:57.505086 | orchestrator | 2026-03-07 01:20:57.505093 | orchestrator | + set -e 2026-03-07 01:20:57.505101 | orchestrator | + echo 2026-03-07 01:20:57.505107 | orchestrator | + echo '# BOOTSTRAP' 2026-03-07 01:20:57.505117 | orchestrator | + echo 2026-03-07 01:20:57.505141 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2026-03-07 01:20:57.511394 | orchestrator | + set -e 2026-03-07 01:20:57.511478 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh 2026-03-07 01:21:03.196402 | orchestrator | 2026-03-07 01:21:03 | INFO  | It takes a moment until task 0ef47d5f-2502-4b3c-be46-d62bd4a41817 (flavor-manager) has been started and output is visible here. 
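The repeated "Wait 1 second(s) until refresh of running tasks" entries above come from the osism CLI polling the task manager until the Ansible run finishes. A generic sketch of such a wait-until helper (names, signature, and timing are illustrative assumptions, not the actual osism implementation):

```python
import time

def wait_until(check, timeout=300.0, interval=2.0,
               clock=time.monotonic, sleep=time.sleep):
    """Poll check() until it returns True or timeout seconds elapse.

    clock/sleep are injectable so the loop can be tested without
    real delays; this mirrors the refresh loop seen in the log only
    in spirit, not in implementation detail.
    """
    deadline = clock() + timeout
    while clock() < deadline:
        if check():
            return True
        sleep(interval)
    return False
```

Injecting a fake clock makes the loop deterministic in tests, which is the main reason to parameterize `clock` and `sleep` rather than calling `time` functions directly.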
2026-03-07 01:21:12.260135 | orchestrator | 2026-03-07 01:21:07 | INFO  | Flavor SCS-1L-1 created 2026-03-07 01:21:12.260283 | orchestrator | 2026-03-07 01:21:07 | INFO  | Flavor SCS-1L-1-5 created 2026-03-07 01:21:12.260299 | orchestrator | 2026-03-07 01:21:07 | INFO  | Flavor SCS-1V-2 created 2026-03-07 01:21:12.260307 | orchestrator | 2026-03-07 01:21:07 | INFO  | Flavor SCS-1V-2-5 created 2026-03-07 01:21:12.260313 | orchestrator | 2026-03-07 01:21:07 | INFO  | Flavor SCS-1V-4 created 2026-03-07 01:21:12.260320 | orchestrator | 2026-03-07 01:21:08 | INFO  | Flavor SCS-1V-4-10 created 2026-03-07 01:21:12.260326 | orchestrator | 2026-03-07 01:21:08 | INFO  | Flavor SCS-1V-8 created 2026-03-07 01:21:12.260334 | orchestrator | 2026-03-07 01:21:08 | INFO  | Flavor SCS-1V-8-20 created 2026-03-07 01:21:12.260353 | orchestrator | 2026-03-07 01:21:08 | INFO  | Flavor SCS-2V-4 created 2026-03-07 01:21:12.260360 | orchestrator | 2026-03-07 01:21:08 | INFO  | Flavor SCS-2V-4-10 created 2026-03-07 01:21:12.260368 | orchestrator | 2026-03-07 01:21:08 | INFO  | Flavor SCS-2V-8 created 2026-03-07 01:21:12.260376 | orchestrator | 2026-03-07 01:21:09 | INFO  | Flavor SCS-2V-8-20 created 2026-03-07 01:21:12.260383 | orchestrator | 2026-03-07 01:21:09 | INFO  | Flavor SCS-2V-16 created 2026-03-07 01:21:12.260391 | orchestrator | 2026-03-07 01:21:09 | INFO  | Flavor SCS-2V-16-50 created 2026-03-07 01:21:12.260398 | orchestrator | 2026-03-07 01:21:09 | INFO  | Flavor SCS-4V-8 created 2026-03-07 01:21:12.260402 | orchestrator | 2026-03-07 01:21:09 | INFO  | Flavor SCS-4V-8-20 created 2026-03-07 01:21:12.260407 | orchestrator | 2026-03-07 01:21:09 | INFO  | Flavor SCS-4V-16 created 2026-03-07 01:21:12.260411 | orchestrator | 2026-03-07 01:21:10 | INFO  | Flavor SCS-4V-16-50 created 2026-03-07 01:21:12.260416 | orchestrator | 2026-03-07 01:21:10 | INFO  | Flavor SCS-4V-32 created 2026-03-07 01:21:12.260420 | orchestrator | 2026-03-07 01:21:10 | INFO  | Flavor SCS-4V-32-100 created 
2026-03-07 01:21:12.260424 | orchestrator | 2026-03-07 01:21:10 | INFO  | Flavor SCS-8V-16 created 2026-03-07 01:21:12.260429 | orchestrator | 2026-03-07 01:21:10 | INFO  | Flavor SCS-8V-16-50 created 2026-03-07 01:21:12.260433 | orchestrator | 2026-03-07 01:21:10 | INFO  | Flavor SCS-8V-32 created 2026-03-07 01:21:12.260437 | orchestrator | 2026-03-07 01:21:11 | INFO  | Flavor SCS-8V-32-100 created 2026-03-07 01:21:12.260441 | orchestrator | 2026-03-07 01:21:11 | INFO  | Flavor SCS-16V-32 created 2026-03-07 01:21:12.260445 | orchestrator | 2026-03-07 01:21:11 | INFO  | Flavor SCS-16V-32-100 created 2026-03-07 01:21:12.260450 | orchestrator | 2026-03-07 01:21:11 | INFO  | Flavor SCS-2V-4-20s created 2026-03-07 01:21:12.260454 | orchestrator | 2026-03-07 01:21:11 | INFO  | Flavor SCS-4V-8-50s created 2026-03-07 01:21:12.260458 | orchestrator | 2026-03-07 01:21:11 | INFO  | Flavor SCS-4V-16-100s created 2026-03-07 01:21:12.260462 | orchestrator | 2026-03-07 01:21:11 | INFO  | Flavor SCS-8V-32-100s created 2026-03-07 01:21:15.198090 | orchestrator | 2026-03-07 01:21:15 | INFO  | Trying to run play bootstrap-basic in environment openstack 2026-03-07 01:21:15.210689 | orchestrator | 2026-03-07 01:21:15 | INFO  | Prepare task for execution of bootstrap-basic. 2026-03-07 01:21:15.288263 | orchestrator | 2026-03-07 01:21:15 | INFO  | Task 3bb9613e-5db6-43e7-9f07-411ce8417895 (bootstrap-basic) was prepared for execution. 2026-03-07 01:21:15.288376 | orchestrator | 2026-03-07 01:21:15 | INFO  | It takes a moment until task 3bb9613e-5db6-43e7-9f07-411ce8417895 (bootstrap-basic) has been started and output is visible here. 
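The flavor names created above follow the SCS naming convention, which encodes vCPU count, RAM, and optionally root disk size directly in the name (e.g. `SCS-2V-4-10` for 2 vCPUs, 4 GiB RAM, 10 GB disk). A minimal sketch of decoding that scheme — the field mapping is an assumption based on the SCS naming standard, not taken from the flavor-manager tooling this job runs:

```python
def parse_scs_flavor(name: str) -> dict:
    """Decode an SCS flavor name like 'SCS-2V-4-10' or 'SCS-2V-4-20s'.

    Assumed layout: SCS-<n><core type>-<ram GiB>[-<disk GB>[s]],
    where the core-type letter is e.g. V (dedicated vCPU thread) or
    L (low-performance core), and a trailing 's' on the disk field
    marks SSD-backed storage.
    """
    parts = name.split("-")
    if parts[0] != "SCS":
        raise ValueError(f"not an SCS flavor name: {name}")
    cpu_field = parts[1]                 # e.g. "2V" or "1L"
    vcpus = int(cpu_field[:-1])
    core_type = cpu_field[-1]
    ram_gib = int(parts[2])
    disk_gb = None
    ssd = False
    if len(parts) > 3:
        disk_field = parts[3]            # e.g. "10" or "20s"
        ssd = disk_field.endswith("s")
        disk_gb = int(disk_field.rstrip("s"))
    return {"vcpus": vcpus, "core_type": core_type,
            "ram_gib": ram_gib, "disk_gb": disk_gb, "ssd": ssd}
```

Flavors without a disk field (such as `SCS-1V-2` in the log) are diskless/boot-from-volume variants, which is why `disk_gb` stays `None` for them here.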
2026-03-07 01:22:08.588699 | orchestrator | 2026-03-07 01:22:08.588818 | orchestrator | PLAY [Bootstrap basic OpenStack services] ************************************** 2026-03-07 01:22:08.588835 | orchestrator | 2026-03-07 01:22:08.588848 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-07 01:22:08.588860 | orchestrator | Saturday 07 March 2026 01:21:20 +0000 (0:00:00.095) 0:00:00.095 ******** 2026-03-07 01:22:08.588871 | orchestrator | ok: [localhost] 2026-03-07 01:22:08.588884 | orchestrator | 2026-03-07 01:22:08.588895 | orchestrator | TASK [Get volume type LUKS] **************************************************** 2026-03-07 01:22:08.588906 | orchestrator | Saturday 07 March 2026 01:21:22 +0000 (0:00:02.231) 0:00:02.326 ******** 2026-03-07 01:22:08.588919 | orchestrator | ok: [localhost] 2026-03-07 01:22:08.588930 | orchestrator | 2026-03-07 01:22:08.588942 | orchestrator | TASK [Create volume type LUKS] ************************************************* 2026-03-07 01:22:08.588953 | orchestrator | Saturday 07 March 2026 01:21:34 +0000 (0:00:11.363) 0:00:13.690 ******** 2026-03-07 01:22:08.588964 | orchestrator | changed: [localhost] 2026-03-07 01:22:08.588976 | orchestrator | 2026-03-07 01:22:08.588988 | orchestrator | TASK [Create public network] *************************************************** 2026-03-07 01:22:08.588999 | orchestrator | Saturday 07 March 2026 01:21:42 +0000 (0:00:08.485) 0:00:22.176 ******** 2026-03-07 01:22:08.589010 | orchestrator | changed: [localhost] 2026-03-07 01:22:08.589021 | orchestrator | 2026-03-07 01:22:08.589036 | orchestrator | TASK [Set public network to default] ******************************************* 2026-03-07 01:22:08.589048 | orchestrator | Saturday 07 March 2026 01:21:48 +0000 (0:00:05.642) 0:00:27.818 ******** 2026-03-07 01:22:08.589059 | orchestrator | changed: [localhost] 2026-03-07 01:22:08.589071 | orchestrator | 2026-03-07 01:22:08.589104 | orchestrator 
| TASK [Create public subnet] **************************************************** 2026-03-07 01:22:08.589117 | orchestrator | Saturday 07 March 2026 01:21:55 +0000 (0:00:07.031) 0:00:34.850 ******** 2026-03-07 01:22:08.589128 | orchestrator | changed: [localhost] 2026-03-07 01:22:08.589139 | orchestrator | 2026-03-07 01:22:08.589150 | orchestrator | TASK [Create default IPv4 subnet pool] ***************************************** 2026-03-07 01:22:08.589161 | orchestrator | Saturday 07 March 2026 01:22:00 +0000 (0:00:04.949) 0:00:39.799 ******** 2026-03-07 01:22:08.589172 | orchestrator | changed: [localhost] 2026-03-07 01:22:08.589183 | orchestrator | 2026-03-07 01:22:08.589198 | orchestrator | TASK [Create manager role] ***************************************************** 2026-03-07 01:22:08.589223 | orchestrator | Saturday 07 March 2026 01:22:04 +0000 (0:00:04.216) 0:00:44.015 ******** 2026-03-07 01:22:08.589236 | orchestrator | ok: [localhost] 2026-03-07 01:22:08.589249 | orchestrator | 2026-03-07 01:22:08.589262 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-07 01:22:08.589276 | orchestrator | localhost : ok=8  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-07 01:22:08.589291 | orchestrator | 2026-03-07 01:22:08.589302 | orchestrator | 2026-03-07 01:22:08.589313 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-07 01:22:08.589324 | orchestrator | Saturday 07 March 2026 01:22:08 +0000 (0:00:03.776) 0:00:47.792 ******** 2026-03-07 01:22:08.589336 | orchestrator | =============================================================================== 2026-03-07 01:22:08.589347 | orchestrator | Get volume type LUKS --------------------------------------------------- 11.36s 2026-03-07 01:22:08.589382 | orchestrator | Create volume type LUKS ------------------------------------------------- 8.49s 2026-03-07 01:22:08.589394 | 
orchestrator | Set public network to default ------------------------------------------- 7.03s 2026-03-07 01:22:08.589405 | orchestrator | Create public network --------------------------------------------------- 5.64s 2026-03-07 01:22:08.589417 | orchestrator | Create public subnet ---------------------------------------------------- 4.95s 2026-03-07 01:22:08.589428 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 4.22s 2026-03-07 01:22:08.589440 | orchestrator | Create manager role ----------------------------------------------------- 3.78s 2026-03-07 01:22:08.589451 | orchestrator | Gathering Facts --------------------------------------------------------- 2.23s 2026-03-07 01:22:11.318489 | orchestrator | 2026-03-07 01:22:11 | INFO  | It takes a moment until task 00c70232-6d1a-4bb5-823d-57653aa53c23 (image-manager) has been started and output is visible here. 2026-03-07 01:22:54.160671 | orchestrator | 2026-03-07 01:22:14 | INFO  | Processing image 'Cirros 0.6.2' 2026-03-07 01:22:54.160793 | orchestrator | 2026-03-07 01:22:14 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302 2026-03-07 01:22:54.160808 | orchestrator | 2026-03-07 01:22:14 | INFO  | Importing image Cirros 0.6.2 2026-03-07 01:22:54.160822 | orchestrator | 2026-03-07 01:22:14 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2026-03-07 01:22:54.160838 | orchestrator | 2026-03-07 01:22:16 | INFO  | Waiting for image to leave queued state... 2026-03-07 01:22:54.160852 | orchestrator | 2026-03-07 01:22:18 | INFO  | Waiting for import to complete... 
2026-03-07 01:22:54.160865 | orchestrator | 2026-03-07 01:22:29 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images 2026-03-07 01:22:54.160884 | orchestrator | 2026-03-07 01:22:29 | INFO  | Checking parameters of 'Cirros 0.6.2' 2026-03-07 01:22:54.160899 | orchestrator | 2026-03-07 01:22:29 | INFO  | Setting internal_version = 0.6.2 2026-03-07 01:22:54.160911 | orchestrator | 2026-03-07 01:22:29 | INFO  | Setting image_original_user = cirros 2026-03-07 01:22:54.160924 | orchestrator | 2026-03-07 01:22:29 | INFO  | Adding tag os:cirros 2026-03-07 01:22:54.160936 | orchestrator | 2026-03-07 01:22:29 | INFO  | Setting property architecture: x86_64 2026-03-07 01:22:54.160950 | orchestrator | 2026-03-07 01:22:29 | INFO  | Setting property hw_disk_bus: scsi 2026-03-07 01:22:54.160961 | orchestrator | 2026-03-07 01:22:30 | INFO  | Setting property hw_rng_model: virtio 2026-03-07 01:22:54.160974 | orchestrator | 2026-03-07 01:22:30 | INFO  | Setting property hw_scsi_model: virtio-scsi 2026-03-07 01:22:54.160986 | orchestrator | 2026-03-07 01:22:30 | INFO  | Setting property hw_watchdog_action: reset 2026-03-07 01:22:54.160999 | orchestrator | 2026-03-07 01:22:30 | INFO  | Setting property hypervisor_type: qemu 2026-03-07 01:22:54.161022 | orchestrator | 2026-03-07 01:22:31 | INFO  | Setting property os_distro: cirros 2026-03-07 01:22:54.161062 | orchestrator | 2026-03-07 01:22:31 | INFO  | Setting property os_purpose: minimal 2026-03-07 01:22:54.161075 | orchestrator | 2026-03-07 01:22:31 | INFO  | Setting property replace_frequency: never 2026-03-07 01:22:54.161088 | orchestrator | 2026-03-07 01:22:31 | INFO  | Setting property uuid_validity: none 2026-03-07 01:22:54.161102 | orchestrator | 2026-03-07 01:22:32 | INFO  | Setting property provided_until: none 2026-03-07 01:22:54.161116 | orchestrator | 2026-03-07 01:22:32 | INFO  | Setting property image_description: Cirros 2026-03-07 01:22:54.161129 | orchestrator | 2026-03-07 01:22:32 | INFO  | 
Setting property image_name: Cirros 2026-03-07 01:22:54.161168 | orchestrator | 2026-03-07 01:22:32 | INFO  | Setting property internal_version: 0.6.2 2026-03-07 01:22:54.161185 | orchestrator | 2026-03-07 01:22:33 | INFO  | Setting property image_original_user: cirros 2026-03-07 01:22:54.161201 | orchestrator | 2026-03-07 01:22:33 | INFO  | Setting property os_version: 0.6.2 2026-03-07 01:22:54.161220 | orchestrator | 2026-03-07 01:22:33 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2026-03-07 01:22:54.161235 | orchestrator | 2026-03-07 01:22:33 | INFO  | Setting property image_build_date: 2023-05-30 2026-03-07 01:22:54.161249 | orchestrator | 2026-03-07 01:22:34 | INFO  | Checking status of 'Cirros 0.6.2' 2026-03-07 01:22:54.161262 | orchestrator | 2026-03-07 01:22:34 | INFO  | Checking visibility of 'Cirros 0.6.2' 2026-03-07 01:22:54.161281 | orchestrator | 2026-03-07 01:22:34 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public' 2026-03-07 01:22:54.161296 | orchestrator | 2026-03-07 01:22:34 | INFO  | Processing image 'Cirros 0.6.3' 2026-03-07 01:22:54.161311 | orchestrator | 2026-03-07 01:22:34 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302 2026-03-07 01:22:54.161325 | orchestrator | 2026-03-07 01:22:34 | INFO  | Importing image Cirros 0.6.3 2026-03-07 01:22:54.161340 | orchestrator | 2026-03-07 01:22:34 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2026-03-07 01:22:54.161354 | orchestrator | 2026-03-07 01:22:35 | INFO  | Waiting for image to leave queued state... 2026-03-07 01:22:54.161368 | orchestrator | 2026-03-07 01:22:37 | INFO  | Waiting for import to complete... 
2026-03-07 01:22:54.161407 | orchestrator | 2026-03-07 01:22:47 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images 2026-03-07 01:22:54.161419 | orchestrator | 2026-03-07 01:22:48 | INFO  | Checking parameters of 'Cirros 0.6.3' 2026-03-07 01:22:54.161429 | orchestrator | 2026-03-07 01:22:48 | INFO  | Setting internal_version = 0.6.3 2026-03-07 01:22:54.161438 | orchestrator | 2026-03-07 01:22:48 | INFO  | Setting image_original_user = cirros 2026-03-07 01:22:54.161448 | orchestrator | 2026-03-07 01:22:48 | INFO  | Adding tag os:cirros 2026-03-07 01:22:54.161457 | orchestrator | 2026-03-07 01:22:48 | INFO  | Setting property architecture: x86_64 2026-03-07 01:22:54.161467 | orchestrator | 2026-03-07 01:22:48 | INFO  | Setting property hw_disk_bus: scsi 2026-03-07 01:22:54.161476 | orchestrator | 2026-03-07 01:22:49 | INFO  | Setting property hw_rng_model: virtio 2026-03-07 01:22:54.161486 | orchestrator | 2026-03-07 01:22:49 | INFO  | Setting property hw_scsi_model: virtio-scsi 2026-03-07 01:22:54.161496 | orchestrator | 2026-03-07 01:22:49 | INFO  | Setting property hw_watchdog_action: reset 2026-03-07 01:22:54.161506 | orchestrator | 2026-03-07 01:22:49 | INFO  | Setting property hypervisor_type: qemu 2026-03-07 01:22:54.161515 | orchestrator | 2026-03-07 01:22:50 | INFO  | Setting property os_distro: cirros 2026-03-07 01:22:54.161523 | orchestrator | 2026-03-07 01:22:50 | INFO  | Setting property os_purpose: minimal 2026-03-07 01:22:54.161531 | orchestrator | 2026-03-07 01:22:50 | INFO  | Setting property replace_frequency: never 2026-03-07 01:22:54.161540 | orchestrator | 2026-03-07 01:22:50 | INFO  | Setting property uuid_validity: none 2026-03-07 01:22:54.161548 | orchestrator | 2026-03-07 01:22:51 | INFO  | Setting property provided_until: none 2026-03-07 01:22:54.161556 | orchestrator | 2026-03-07 01:22:51 | INFO  | Setting property image_description: Cirros 2026-03-07 01:22:54.161574 | orchestrator | 2026-03-07 01:22:51 | INFO  | 
Setting property image_name: Cirros 2026-03-07 01:22:54.161582 | orchestrator | 2026-03-07 01:22:51 | INFO  | Setting property internal_version: 0.6.3 2026-03-07 01:22:54.161590 | orchestrator | 2026-03-07 01:22:52 | INFO  | Setting property image_original_user: cirros 2026-03-07 01:22:54.161598 | orchestrator | 2026-03-07 01:22:52 | INFO  | Setting property os_version: 0.6.3 2026-03-07 01:22:54.161606 | orchestrator | 2026-03-07 01:22:52 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2026-03-07 01:22:54.161614 | orchestrator | 2026-03-07 01:22:52 | INFO  | Setting property image_build_date: 2024-09-26 2026-03-07 01:22:54.161622 | orchestrator | 2026-03-07 01:22:53 | INFO  | Checking status of 'Cirros 0.6.3' 2026-03-07 01:22:54.161630 | orchestrator | 2026-03-07 01:22:53 | INFO  | Checking visibility of 'Cirros 0.6.3' 2026-03-07 01:22:54.161638 | orchestrator | 2026-03-07 01:22:53 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public' 2026-03-07 01:22:54.522593 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh 2026-03-07 01:22:56.997920 | orchestrator | 2026-03-07 01:22:56 | INFO  | date: 2026-03-06 2026-03-07 01:22:56.998213 | orchestrator | 2026-03-07 01:22:56 | INFO  | image: octavia-amphora-haproxy-2024.2.20260306.qcow2 2026-03-07 01:22:56.998283 | orchestrator | 2026-03-07 01:22:56 | INFO  | url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260306.qcow2 2026-03-07 01:22:56.998311 | orchestrator | 2026-03-07 01:22:56 | INFO  | checksum_url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260306.qcow2.CHECKSUM 2026-03-07 01:22:57.136409 | orchestrator | 2026-03-07 01:22:57 | INFO  | checksum: localhost | ok: "/var/lib/zuul/builds/9bd0fd9b250244a9b798ae8004b80082/work/logs" 2026-03-07 01:23:29.280550 | 
orchestrator -> localhost | changed: "/var/lib/zuul/builds/9bd0fd9b250244a9b798ae8004b80082/work/artifacts" 2026-03-07 01:23:29.555649 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/9bd0fd9b250244a9b798ae8004b80082/work/docs" 2026-03-07 01:23:29.571075 | 2026-03-07 01:23:29.571254 | LOOP [fetch-output : Collect logs, artifacts and docs] 2026-03-07 01:23:30.521404 | orchestrator | changed: .d..t...... ./ 2026-03-07 01:23:30.521717 | orchestrator | changed: All items complete 2026-03-07 01:23:30.521757 | 2026-03-07 01:23:31.244710 | orchestrator | changed: .d..t...... ./ 2026-03-07 01:23:31.991253 | orchestrator | changed: .d..t...... ./ 2026-03-07 01:23:32.015641 | 2026-03-07 01:23:32.015779 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir] 2026-03-07 01:23:32.054570 | orchestrator | skipping: Conditional result was False 2026-03-07 01:23:32.057631 | orchestrator | skipping: Conditional result was False 2026-03-07 01:23:32.081362 | 2026-03-07 01:23:32.081474 | PLAY RECAP 2026-03-07 01:23:32.081550 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0 2026-03-07 01:23:32.081588 | 2026-03-07 01:23:32.257776 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2026-03-07 01:23:32.260504 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2026-03-07 01:23:33.044101 | 2026-03-07 01:23:33.044307 | PLAY [Base post] 2026-03-07 01:23:33.059266 | 2026-03-07 01:23:33.059412 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes] 2026-03-07 01:23:34.144881 | orchestrator | changed 2026-03-07 01:23:34.155642 | 2026-03-07 01:23:34.155930 | PLAY RECAP 2026-03-07 01:23:34.156166 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0 2026-03-07 01:23:34.156383 | 2026-03-07 01:23:34.315361 | POST-RUN END RESULT_NORMAL: [trusted : 
github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2026-03-07 01:23:34.317893 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main] 2026-03-07 01:23:35.106237 | 2026-03-07 01:23:35.106418 | PLAY [Base post-logs] 2026-03-07 01:23:35.117422 | 2026-03-07 01:23:35.117575 | TASK [generate-zuul-manifest : Generate Zuul manifest] 2026-03-07 01:23:35.584677 | localhost | changed 2026-03-07 01:23:35.602755 | 2026-03-07 01:23:35.602977 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul] 2026-03-07 01:23:35.642233 | localhost | ok 2026-03-07 01:23:35.648238 | 2026-03-07 01:23:35.648396 | TASK [Set zuul-log-path fact] 2026-03-07 01:23:35.666629 | localhost | ok 2026-03-07 01:23:35.685297 | 2026-03-07 01:23:35.685677 | TASK [set-zuul-log-path-fact : Set log path for a build] 2026-03-07 01:23:35.725535 | localhost | ok 2026-03-07 01:23:35.732787 | 2026-03-07 01:23:35.732980 | TASK [upload-logs : Create log directories] 2026-03-07 01:23:36.268925 | localhost | changed 2026-03-07 01:23:36.273902 | 2026-03-07 01:23:36.274072 | TASK [upload-logs : Ensure logs are readable before uploading] 2026-03-07 01:23:36.797045 | localhost -> localhost | ok: Runtime: 0:00:00.007797 2026-03-07 01:23:36.802109 | 2026-03-07 01:23:36.802262 | TASK [upload-logs : Upload logs to log server] 2026-03-07 01:23:37.383984 | localhost | Output suppressed because no_log was given 2026-03-07 01:23:37.390328 | 2026-03-07 01:23:37.390585 | LOOP [upload-logs : Compress console log and json output] 2026-03-07 01:23:37.453117 | localhost | skipping: Conditional result was False 2026-03-07 01:23:37.458982 | localhost | skipping: Conditional result was False 2026-03-07 01:23:37.467258 | 2026-03-07 01:23:37.467507 | LOOP [upload-logs : Upload compressed console log and json output] 2026-03-07 01:23:37.515981 | localhost | skipping: Conditional result was False 2026-03-07 01:23:37.516720 | 2026-03-07 01:23:37.523247 | localhost | skipping: Conditional 
result was False 2026-03-07 01:23:37.534151 | 2026-03-07 01:23:37.534372 | LOOP [upload-logs : Upload console log and json output]