2026-02-04 00:00:08.367782 | Job console starting
2026-02-04 00:00:08.417231 | Updating git repos
2026-02-04 00:00:08.759542 | Cloning repos into workspace
2026-02-04 00:00:09.086228 | Restoring repo states
2026-02-04 00:00:09.111307 | Merging changes
2026-02-04 00:00:09.111330 | Checking out repos
2026-02-04 00:00:09.722914 | Preparing playbooks
2026-02-04 00:00:10.782368 | Running Ansible setup
2026-02-04 00:00:18.061698 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2026-02-04 00:00:19.851088 |
2026-02-04 00:00:19.851199 | PLAY [Base pre]
2026-02-04 00:00:19.893551 |
2026-02-04 00:00:19.893663 | TASK [Setup log path fact]
2026-02-04 00:00:19.935532 | orchestrator | ok
2026-02-04 00:00:19.999146 |
2026-02-04 00:00:19.999274 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-02-04 00:00:20.050287 | orchestrator | ok
2026-02-04 00:00:20.061674 |
2026-02-04 00:00:20.061769 | TASK [emit-job-header : Print job information]
2026-02-04 00:00:20.222370 | # Job Information
2026-02-04 00:00:20.222510 | Ansible Version: 2.16.14
2026-02-04 00:00:20.222538 | Job: testbed-deploy-current-in-a-nutshell-with-tempest-ubuntu-24.04
2026-02-04 00:00:20.222565 | Pipeline: periodic-midnight
2026-02-04 00:00:20.222584 | Executor: 521e9411259a
2026-02-04 00:00:20.222602 | Triggered by: https://github.com/osism/testbed
2026-02-04 00:00:20.222620 | Event ID: ae64838415194271b89fad81bc239d83
2026-02-04 00:00:20.228085 |
2026-02-04 00:00:20.228176 | LOOP [emit-job-header : Print node information]
2026-02-04 00:00:20.529423 | orchestrator | ok:
2026-02-04 00:00:20.529616 | orchestrator | # Node Information
2026-02-04 00:00:20.529646 | orchestrator | Inventory Hostname: orchestrator
2026-02-04 00:00:20.529666 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2026-02-04 00:00:20.529684 | orchestrator | Username: zuul-testbed05
2026-02-04 00:00:20.529701 | orchestrator | Distro: Debian 12.13
2026-02-04 00:00:20.529720 | orchestrator | Provider: static-testbed
2026-02-04 00:00:20.529737 | orchestrator | Region:
2026-02-04 00:00:20.529754 | orchestrator | Label: testbed-orchestrator
2026-02-04 00:00:20.529770 | orchestrator | Product Name: OpenStack Nova
2026-02-04 00:00:20.529786 | orchestrator | Interface IP: 81.163.193.140
2026-02-04 00:00:20.551542 |
2026-02-04 00:00:20.551635 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2026-02-04 00:00:22.024457 | orchestrator -> localhost | changed
2026-02-04 00:00:22.030705 |
2026-02-04 00:00:22.030797 | TASK [log-inventory : Copy ansible inventory to logs dir]
2026-02-04 00:00:25.270229 | orchestrator -> localhost | changed
2026-02-04 00:00:25.282215 |
2026-02-04 00:00:25.282305 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2026-02-04 00:00:26.252236 | orchestrator -> localhost | ok
2026-02-04 00:00:26.257927 |
2026-02-04 00:00:26.258018 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2026-02-04 00:00:26.295413 | orchestrator | ok
2026-02-04 00:00:26.318183 | orchestrator | included: /var/lib/zuul/builds/fc9ae95db2ad46a99572c1cde3cf2fd8/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2026-02-04 00:00:26.332891 |
2026-02-04 00:00:26.332980 | TASK [add-build-sshkey : Create Temp SSH key]
2026-02-04 00:00:30.581432 | orchestrator -> localhost | Generating public/private rsa key pair.
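Each entry in this console stream follows the layout `timestamp | node | message`, where the node column (`orchestrator`, or `orchestrator -> localhost` for delegated tasks) is absent on executor-level lines. A minimal sketch for splitting such lines back into their fields; the function name and regex are illustrative, not part of Zuul:

```python
import re

# Matches "2026-02-04 00:00:20.529684 | orchestrator | Username: zuul-testbed05"
# and executor-level lines like "2026-02-04 00:00:08.367782 | Job console starting".
LINE_RE = re.compile(
    r"^(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d{6}) \| "
    r"(?:(?P<node>[\w.-]+(?: -> [\w.-]+)?) \| )?(?P<msg>.*)$"
)

def parse_console_line(line: str) -> dict:
    """Split one Zuul console line into timestamp, node (or None), and message."""
    m = LINE_RE.match(line)
    if m is None:
        raise ValueError(f"unrecognized console line: {line!r}")
    return m.groupdict()
```

Delegated-task lines keep the full `orchestrator -> localhost` string in the `node` field, so the original delegation information is preserved.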
2026-02-04 00:00:30.581587 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/fc9ae95db2ad46a99572c1cde3cf2fd8/work/fc9ae95db2ad46a99572c1cde3cf2fd8_id_rsa
2026-02-04 00:00:30.581619 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/fc9ae95db2ad46a99572c1cde3cf2fd8/work/fc9ae95db2ad46a99572c1cde3cf2fd8_id_rsa.pub
2026-02-04 00:00:30.581640 | orchestrator -> localhost | The key fingerprint is:
2026-02-04 00:00:30.581663 | orchestrator -> localhost | SHA256:wV8ZgupEhwFIx8rK3FHClEaAsxsYsmhtk54xyydYYXA zuul-build-sshkey
2026-02-04 00:00:30.581681 | orchestrator -> localhost | The key's randomart image is:
2026-02-04 00:00:30.581711 | orchestrator -> localhost | +---[RSA 3072]----+
2026-02-04 00:00:30.581731 | orchestrator -> localhost | |.+BE+..o .. . |
2026-02-04 00:00:30.581749 | orchestrator -> localhost | |= oO..o.o . o |
2026-02-04 00:00:30.581767 | orchestrator -> localhost | |+*+.=. oo o |
2026-02-04 00:00:30.581783 | orchestrator -> localhost | |*.oX o o . |
2026-02-04 00:00:30.581800 | orchestrator -> localhost | |++B Bo S . |
2026-02-04 00:00:30.581819 | orchestrator -> localhost | |o+ B .. |
2026-02-04 00:00:30.581846 | orchestrator -> localhost | | o |
2026-02-04 00:00:30.581865 | orchestrator -> localhost | | |
2026-02-04 00:00:30.581883 | orchestrator -> localhost | | |
2026-02-04 00:00:30.581900 | orchestrator -> localhost | +----[SHA256]-----+
2026-02-04 00:00:30.581942 | orchestrator -> localhost | ok: Runtime: 0:00:02.787558
2026-02-04 00:00:30.588117 |
2026-02-04 00:00:30.588200 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2026-02-04 00:00:30.621696 | orchestrator | ok
2026-02-04 00:00:30.640165 | orchestrator | included: /var/lib/zuul/builds/fc9ae95db2ad46a99572c1cde3cf2fd8/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2026-02-04 00:00:30.666243 |
2026-02-04 00:00:30.666340 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2026-02-04 00:00:30.733163 | orchestrator | skipping: Conditional result was False
2026-02-04 00:00:30.740876 |
2026-02-04 00:00:30.740981 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2026-02-04 00:00:31.813499 | orchestrator | changed
2026-02-04 00:00:31.819617 |
2026-02-04 00:00:31.819707 | TASK [add-build-sshkey : Make sure user has a .ssh]
2026-02-04 00:00:32.096178 | orchestrator | ok
2026-02-04 00:00:32.104684 |
2026-02-04 00:00:32.104778 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2026-02-04 00:00:32.597975 | orchestrator | ok
2026-02-04 00:00:32.604982 |
2026-02-04 00:00:32.605083 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2026-02-04 00:00:33.134439 | orchestrator | ok
2026-02-04 00:00:33.140593 |
2026-02-04 00:00:33.145578 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2026-02-04 00:00:33.192376 | orchestrator | skipping: Conditional result was False
2026-02-04 00:00:33.198128 |
2026-02-04 00:00:33.198210 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2026-02-04 00:00:34.509382 | orchestrator -> localhost | changed
2026-02-04 00:00:34.522462 |
2026-02-04 00:00:34.522549 | TASK [add-build-sshkey : Add back temp key]
2026-02-04 00:00:35.459656 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/fc9ae95db2ad46a99572c1cde3cf2fd8/work/fc9ae95db2ad46a99572c1cde3cf2fd8_id_rsa (zuul-build-sshkey)
2026-02-04 00:00:35.459885 | orchestrator -> localhost | ok: Runtime: 0:00:00.008355
2026-02-04 00:00:35.465910 |
2026-02-04 00:00:35.466016 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2026-02-04 00:00:36.264821 | orchestrator | ok
2026-02-04 00:00:36.269676 |
2026-02-04 00:00:36.269753 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2026-02-04 00:00:36.338227 | orchestrator | skipping: Conditional result was False
2026-02-04 00:00:36.440582 |
2026-02-04 00:00:36.440679 | TASK [start-zuul-console : Start zuul_console daemon.]
2026-02-04 00:00:36.865085 | orchestrator | ok
2026-02-04 00:00:36.879002 |
2026-02-04 00:00:36.879102 | TASK [validate-host : Define zuul_info_dir fact]
2026-02-04 00:00:36.962420 | orchestrator | ok
2026-02-04 00:00:36.974254 |
2026-02-04 00:00:36.974355 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2026-02-04 00:00:37.599483 | orchestrator -> localhost | ok
2026-02-04 00:00:37.605878 |
2026-02-04 00:00:37.605975 | TASK [validate-host : Collect information about the host]
2026-02-04 00:00:39.099928 | orchestrator | ok
2026-02-04 00:00:39.137386 |
2026-02-04 00:00:39.137493 | TASK [validate-host : Sanitize hostname]
2026-02-04 00:00:39.296963 | orchestrator | ok
2026-02-04 00:00:39.301446 |
2026-02-04 00:00:39.301529 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2026-02-04 00:00:40.586329 | orchestrator -> localhost | changed
2026-02-04 00:00:40.591539 |
2026-02-04 00:00:40.591625 | TASK [validate-host : Collect information about zuul worker]
2026-02-04 00:00:41.056778 | orchestrator | ok
2026-02-04 00:00:41.061234 |
2026-02-04 00:00:41.061320 | TASK [validate-host : Write out all zuul information for each host]
2026-02-04 00:00:42.119150 | orchestrator -> localhost | changed
2026-02-04 00:00:42.128685 |
2026-02-04 00:00:42.128769 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2026-02-04 00:00:42.489287 | orchestrator | ok
2026-02-04 00:00:42.494104 |
2026-02-04 00:00:42.494196 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2026-02-04 00:01:57.660637 | orchestrator | changed:
2026-02-04 00:01:57.660881 | orchestrator | .d..t...... src/
2026-02-04 00:01:57.660920 | orchestrator | .d..t...... src/github.com/
2026-02-04 00:01:57.660946 | orchestrator | .d..t...... src/github.com/osism/
2026-02-04 00:01:57.660968 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2026-02-04 00:01:57.660988 | orchestrator | RedHat.yml
2026-02-04 00:01:57.675948 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2026-02-04 00:01:57.675966 | orchestrator | RedHat.yml
2026-02-04 00:01:57.676019 | orchestrator | = 1.53.0"...
2026-02-04 00:02:10.880955 | orchestrator | - Finding hashicorp/local versions matching ">= 2.2.0"...
2026-02-04 00:02:11.066531 | orchestrator | - Installing hashicorp/null v3.2.4...
2026-02-04 00:02:11.753871 | orchestrator | - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2026-02-04 00:02:12.179956 | orchestrator | - Installing terraform-provider-openstack/openstack v3.4.0...
2026-02-04 00:02:13.038767 | orchestrator | - Installed terraform-provider-openstack/openstack v3.4.0 (signed, key ID 4F80527A391BEFD2)
2026-02-04 00:02:13.109545 | orchestrator | - Installing hashicorp/local v2.6.2...
2026-02-04 00:02:13.735972 | orchestrator | - Installed hashicorp/local v2.6.2 (signed, key ID 0C0AF313E5FD9F80)
2026-02-04 00:02:13.736026 | orchestrator |
2026-02-04 00:02:13.736033 | orchestrator | Providers are signed by their developers.
2026-02-04 00:02:13.736043 | orchestrator | If you'd like to know more about provider signing, you can read about it here:
2026-02-04 00:02:13.736047 | orchestrator | https://opentofu.org/docs/cli/plugins/signing/
2026-02-04 00:02:13.736054 | orchestrator |
2026-02-04 00:02:13.736058 | orchestrator | OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2026-02-04 00:02:13.736063 | orchestrator | selections it made above. Include this file in your version control repository
2026-02-04 00:02:13.736073 | orchestrator | so that OpenTofu can guarantee to make the same selections by default when
2026-02-04 00:02:13.736077 | orchestrator | you run "tofu init" in the future.
2026-02-04 00:02:13.736082 | orchestrator |
2026-02-04 00:02:13.736086 | orchestrator | OpenTofu has been successfully initialized!
2026-02-04 00:02:13.736090 | orchestrator |
2026-02-04 00:02:13.736094 | orchestrator | You may now begin working with OpenTofu. Try running "tofu plan" to see
2026-02-04 00:02:13.736098 | orchestrator | any changes that are required for your infrastructure. All OpenTofu commands
2026-02-04 00:02:13.736102 | orchestrator | should now work.
2026-02-04 00:02:13.736106 | orchestrator |
2026-02-04 00:02:13.736109 | orchestrator | If you ever set or change modules or backend configuration for OpenTofu,
2026-02-04 00:02:13.736113 | orchestrator | rerun this command to reinitialize your working directory. If you forget, other
2026-02-04 00:02:13.736118 | orchestrator | commands will detect it and remind you to do so if necessary.
2026-02-04 00:02:13.943554 | orchestrator | Created and switched to workspace "ci"!
2026-02-04 00:02:13.943638 | orchestrator |
2026-02-04 00:02:13.943654 | orchestrator | You're now on a new, empty workspace. Workspaces isolate their state,
2026-02-04 00:02:13.943666 | orchestrator | so if you run "tofu plan" OpenTofu will not see any existing state
2026-02-04 00:02:13.943677 | orchestrator | for this configuration.
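The OpenTofu output above corresponds to an init, workspace-creation, and plan sequence. A sketch of the equivalent manual invocation, assuming the testbed's Terraform files and `ci.auto.tfvars` (shown later in the log) are in the working directory and OpenStack credentials are already configured; the `-out` file name is an assumption, not taken from the log:

```shell
#!/usr/bin/env sh
set -eu

# Install the providers recorded in .terraform.lock.hcl
# (hashicorp/null, terraform-provider-openstack/openstack, hashicorp/local).
tofu init

# Create and switch to the isolated "ci" workspace, as in the log.
tofu workspace new ci

# Produce the "+ create" execution plan; ci.auto.tfvars is picked up automatically.
tofu plan -out=ci.plan
```

Because workspaces isolate state, the subsequent plan starts from an empty state and proposes creating every resource, which matches the all-`+ create` plan that follows.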
2026-02-04 00:02:14.059793 | orchestrator | ci.auto.tfvars
2026-02-04 00:02:14.364409 | orchestrator | default_custom.tf
2026-02-04 00:02:15.931583 | orchestrator | data.openstack_networking_network_v2.public: Reading...
2026-02-04 00:02:16.494085 | orchestrator | data.openstack_networking_network_v2.public: Read complete after 0s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2026-02-04 00:02:16.845758 | orchestrator |
2026-02-04 00:02:16.845820 | orchestrator | OpenTofu used the selected providers to generate the following execution
2026-02-04 00:02:16.845829 | orchestrator | plan. Resource actions are indicated with the following symbols:
2026-02-04 00:02:16.845833 | orchestrator | + create
2026-02-04 00:02:16.845838 | orchestrator | <= read (data resources)
2026-02-04 00:02:16.845843 | orchestrator |
2026-02-04 00:02:16.845848 | orchestrator | OpenTofu will perform the following actions:
2026-02-04 00:02:16.845852 | orchestrator |
2026-02-04 00:02:16.845856 | orchestrator | # data.openstack_images_image_v2.image will be read during apply
2026-02-04 00:02:16.845860 | orchestrator | # (config refers to values not yet known)
2026-02-04 00:02:16.845864 | orchestrator | <= data "openstack_images_image_v2" "image" {
2026-02-04 00:02:16.845868 | orchestrator | + checksum = (known after apply)
2026-02-04 00:02:16.845872 | orchestrator | + created_at = (known after apply)
2026-02-04 00:02:16.845876 | orchestrator | + file = (known after apply)
2026-02-04 00:02:16.845880 | orchestrator | + id = (known after apply)
2026-02-04 00:02:16.845902 | orchestrator | + metadata = (known after apply)
2026-02-04 00:02:16.845907 | orchestrator | + min_disk_gb = (known after apply)
2026-02-04 00:02:16.845911 | orchestrator | + min_ram_mb = (known after apply)
2026-02-04 00:02:16.845914 | orchestrator | + most_recent = true
2026-02-04 00:02:16.845919 | orchestrator | + name = (known after apply)
2026-02-04 00:02:16.845923 | orchestrator | + protected = (known after apply)
2026-02-04 00:02:16.845926 | orchestrator | + region = (known after apply)
2026-02-04 00:02:16.845933 | orchestrator | + schema = (known after apply)
2026-02-04 00:02:16.845937 | orchestrator | + size_bytes = (known after apply)
2026-02-04 00:02:16.845941 | orchestrator | + tags = (known after apply)
2026-02-04 00:02:16.845945 | orchestrator | + updated_at = (known after apply)
2026-02-04 00:02:16.845949 | orchestrator | }
2026-02-04 00:02:16.845960 | orchestrator |
2026-02-04 00:02:16.845964 | orchestrator | # data.openstack_images_image_v2.image_node will be read during apply
2026-02-04 00:02:16.845968 | orchestrator | # (config refers to values not yet known)
2026-02-04 00:02:16.845972 | orchestrator | <= data "openstack_images_image_v2" "image_node" {
2026-02-04 00:02:16.845976 | orchestrator | + checksum = (known after apply)
2026-02-04 00:02:16.845980 | orchestrator | + created_at = (known after apply)
2026-02-04 00:02:16.845984 | orchestrator | + file = (known after apply)
2026-02-04 00:02:16.845988 | orchestrator | + id = (known after apply)
2026-02-04 00:02:16.845991 | orchestrator | + metadata = (known after apply)
2026-02-04 00:02:16.845995 | orchestrator | + min_disk_gb = (known after apply)
2026-02-04 00:02:16.845999 | orchestrator | + min_ram_mb = (known after apply)
2026-02-04 00:02:16.846003 | orchestrator | + most_recent = true
2026-02-04 00:02:16.846007 | orchestrator | + name = (known after apply)
2026-02-04 00:02:16.846010 | orchestrator | + protected = (known after apply)
2026-02-04 00:02:16.846035 | orchestrator | + region = (known after apply)
2026-02-04 00:02:16.846039 | orchestrator | + schema = (known after apply)
2026-02-04 00:02:16.846042 | orchestrator | + size_bytes = (known after apply)
2026-02-04 00:02:16.846046 | orchestrator | + tags = (known after apply)
2026-02-04 00:02:16.846050 | orchestrator | + updated_at = (known after apply)
2026-02-04 00:02:16.846054 | orchestrator | }
2026-02-04 00:02:16.846057 | orchestrator |
2026-02-04 00:02:16.846061 | orchestrator | # local_file.MANAGER_ADDRESS will be created
2026-02-04 00:02:16.846065 | orchestrator | + resource "local_file" "MANAGER_ADDRESS" {
2026-02-04 00:02:16.846069 | orchestrator | + content = (known after apply)
2026-02-04 00:02:16.846074 | orchestrator | + content_base64sha256 = (known after apply)
2026-02-04 00:02:16.846077 | orchestrator | + content_base64sha512 = (known after apply)
2026-02-04 00:02:16.846081 | orchestrator | + content_md5 = (known after apply)
2026-02-04 00:02:16.846085 | orchestrator | + content_sha1 = (known after apply)
2026-02-04 00:02:16.846089 | orchestrator | + content_sha256 = (known after apply)
2026-02-04 00:02:16.846092 | orchestrator | + content_sha512 = (known after apply)
2026-02-04 00:02:16.846096 | orchestrator | + directory_permission = "0777"
2026-02-04 00:02:16.846100 | orchestrator | + file_permission = "0644"
2026-02-04 00:02:16.846104 | orchestrator | + filename = ".MANAGER_ADDRESS.ci"
2026-02-04 00:02:16.846108 | orchestrator | + id = (known after apply)
2026-02-04 00:02:16.846112 | orchestrator | }
2026-02-04 00:02:16.846117 | orchestrator |
2026-02-04 00:02:16.846122 | orchestrator | # local_file.id_rsa_pub will be created
2026-02-04 00:02:16.846125 | orchestrator | + resource "local_file" "id_rsa_pub" {
2026-02-04 00:02:16.846129 | orchestrator | + content = (known after apply)
2026-02-04 00:02:16.846133 | orchestrator | + content_base64sha256 = (known after apply)
2026-02-04 00:02:16.846137 | orchestrator | + content_base64sha512 = (known after apply)
2026-02-04 00:02:16.846140 | orchestrator | + content_md5 = (known after apply)
2026-02-04 00:02:16.846144 | orchestrator | + content_sha1 = (known after apply)
2026-02-04 00:02:16.846148 | orchestrator | + content_sha256 = (known after apply)
2026-02-04 00:02:16.846151 | orchestrator | + content_sha512 = (known after apply)
2026-02-04 00:02:16.846155 | orchestrator | + directory_permission = "0777"
2026-02-04 00:02:16.846159 | orchestrator | + file_permission = "0644"
2026-02-04 00:02:16.846186 | orchestrator | + filename = ".id_rsa.ci.pub"
2026-02-04 00:02:16.846190 | orchestrator | + id = (known after apply)
2026-02-04 00:02:16.846194 | orchestrator | }
2026-02-04 00:02:16.847271 | orchestrator |
2026-02-04 00:02:16.847321 | orchestrator | # local_file.inventory will be created
2026-02-04 00:02:16.847328 | orchestrator | + resource "local_file" "inventory" {
2026-02-04 00:02:16.847334 | orchestrator | + content = (known after apply)
2026-02-04 00:02:16.847339 | orchestrator | + content_base64sha256 = (known after apply)
2026-02-04 00:02:16.847344 | orchestrator | + content_base64sha512 = (known after apply)
2026-02-04 00:02:16.847348 | orchestrator | + content_md5 = (known after apply)
2026-02-04 00:02:16.847353 | orchestrator | + content_sha1 = (known after apply)
2026-02-04 00:02:16.847360 | orchestrator | + content_sha256 = (known after apply)
2026-02-04 00:02:16.847364 | orchestrator | + content_sha512 = (known after apply)
2026-02-04 00:02:16.847368 | orchestrator | + directory_permission = "0777"
2026-02-04 00:02:16.847372 | orchestrator | + file_permission = "0644"
2026-02-04 00:02:16.847376 | orchestrator | + filename = "inventory.ci"
2026-02-04 00:02:16.847380 | orchestrator | + id = (known after apply)
2026-02-04 00:02:16.847389 | orchestrator | }
2026-02-04 00:02:16.847392 | orchestrator |
2026-02-04 00:02:16.847397 | orchestrator | # local_sensitive_file.id_rsa will be created
2026-02-04 00:02:16.847401 | orchestrator | + resource "local_sensitive_file" "id_rsa" {
2026-02-04 00:02:16.847405 | orchestrator | + content = (sensitive value)
2026-02-04 00:02:16.847409 | orchestrator | + content_base64sha256 = (known after apply)
2026-02-04 00:02:16.847413 | orchestrator | + content_base64sha512 = (known after apply)
2026-02-04 00:02:16.847416 | orchestrator | + content_md5 = (known after apply)
2026-02-04 00:02:16.847420 | orchestrator | + content_sha1 = (known after apply)
2026-02-04 00:02:16.847424 | orchestrator | + content_sha256 = (known after apply)
2026-02-04 00:02:16.847428 | orchestrator | + content_sha512 = (known after apply)
2026-02-04 00:02:16.847431 | orchestrator | + directory_permission = "0700"
2026-02-04 00:02:16.847435 | orchestrator | + file_permission = "0600"
2026-02-04 00:02:16.847439 | orchestrator | + filename = ".id_rsa.ci"
2026-02-04 00:02:16.847443 | orchestrator | + id = (known after apply)
2026-02-04 00:02:16.847447 | orchestrator | }
2026-02-04 00:02:16.847450 | orchestrator |
2026-02-04 00:02:16.847454 | orchestrator | # null_resource.node_semaphore will be created
2026-02-04 00:02:16.847458 | orchestrator | + resource "null_resource" "node_semaphore" {
2026-02-04 00:02:16.847462 | orchestrator | + id = (known after apply)
2026-02-04 00:02:16.847465 | orchestrator | }
2026-02-04 00:02:16.847469 | orchestrator |
2026-02-04 00:02:16.847473 | orchestrator | # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2026-02-04 00:02:16.847477 | orchestrator | + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2026-02-04 00:02:16.847481 | orchestrator | + attachment = (known after apply)
2026-02-04 00:02:16.847485 | orchestrator | + availability_zone = "nova"
2026-02-04 00:02:16.847489 | orchestrator | + id = (known after apply)
2026-02-04 00:02:16.847496 | orchestrator | + image_id = (known after apply)
2026-02-04 00:02:16.847500 | orchestrator | + metadata = (known after apply)
2026-02-04 00:02:16.847507 | orchestrator | + name = "testbed-volume-manager-base"
2026-02-04 00:02:16.847510 | orchestrator | + region = (known after apply)
2026-02-04 00:02:16.847514 | orchestrator | + size = 80
2026-02-04 00:02:16.847518 | orchestrator | + volume_retype_policy = "never"
2026-02-04 00:02:16.847521 | orchestrator | + volume_type = "ssd"
2026-02-04 00:02:16.847525 | orchestrator | }
2026-02-04 00:02:16.847529 | orchestrator |
2026-02-04 00:02:16.847533 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2026-02-04 00:02:16.847537 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-04 00:02:16.847540 | orchestrator | + attachment = (known after apply)
2026-02-04 00:02:16.847544 | orchestrator | + availability_zone = "nova"
2026-02-04 00:02:16.847548 | orchestrator | + id = (known after apply)
2026-02-04 00:02:16.847560 | orchestrator | + image_id = (known after apply)
2026-02-04 00:02:16.847564 | orchestrator | + metadata = (known after apply)
2026-02-04 00:02:16.847568 | orchestrator | + name = "testbed-volume-0-node-base"
2026-02-04 00:02:16.847572 | orchestrator | + region = (known after apply)
2026-02-04 00:02:16.847575 | orchestrator | + size = 80
2026-02-04 00:02:16.847583 | orchestrator | + volume_retype_policy = "never"
2026-02-04 00:02:16.847586 | orchestrator | + volume_type = "ssd"
2026-02-04 00:02:16.847590 | orchestrator | }
2026-02-04 00:02:16.847594 | orchestrator |
2026-02-04 00:02:16.847597 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2026-02-04 00:02:16.847601 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-04 00:02:16.847605 | orchestrator | + attachment = (known after apply)
2026-02-04 00:02:16.847609 | orchestrator | + availability_zone = "nova"
2026-02-04 00:02:16.847612 | orchestrator | + id = (known after apply)
2026-02-04 00:02:16.847616 | orchestrator | + image_id = (known after apply)
2026-02-04 00:02:16.847620 | orchestrator | + metadata = (known after apply)
2026-02-04 00:02:16.847623 | orchestrator | + name = "testbed-volume-1-node-base"
2026-02-04 00:02:16.847627 | orchestrator | + region = (known after apply)
2026-02-04 00:02:16.847631 | orchestrator | + size = 80
2026-02-04 00:02:16.847635 | orchestrator | + volume_retype_policy = "never"
2026-02-04 00:02:16.847638 | orchestrator | + volume_type = "ssd"
2026-02-04 00:02:16.847642 | orchestrator | }
2026-02-04 00:02:16.847646 | orchestrator |
2026-02-04 00:02:16.847649 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2026-02-04 00:02:16.847653 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-04 00:02:16.847657 | orchestrator | + attachment = (known after apply)
2026-02-04 00:02:16.847661 | orchestrator | + availability_zone = "nova"
2026-02-04 00:02:16.847664 | orchestrator | + id = (known after apply)
2026-02-04 00:02:16.847668 | orchestrator | + image_id = (known after apply)
2026-02-04 00:02:16.847672 | orchestrator | + metadata = (known after apply)
2026-02-04 00:02:16.847675 | orchestrator | + name = "testbed-volume-2-node-base"
2026-02-04 00:02:16.847679 | orchestrator | + region = (known after apply)
2026-02-04 00:02:16.847683 | orchestrator | + size = 80
2026-02-04 00:02:16.847686 | orchestrator | + volume_retype_policy = "never"
2026-02-04 00:02:16.847690 | orchestrator | + volume_type = "ssd"
2026-02-04 00:02:16.847694 | orchestrator | }
2026-02-04 00:02:16.847697 | orchestrator |
2026-02-04 00:02:16.847701 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2026-02-04 00:02:16.847705 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-04 00:02:16.847709 | orchestrator | + attachment = (known after apply)
2026-02-04 00:02:16.847712 | orchestrator | + availability_zone = "nova"
2026-02-04 00:02:16.847725 | orchestrator | + id = (known after apply)
2026-02-04 00:02:16.847729 | orchestrator | + image_id = (known after apply)
2026-02-04 00:02:16.847732 | orchestrator | + metadata = (known after apply)
2026-02-04 00:02:16.847739 | orchestrator | + name = "testbed-volume-3-node-base"
2026-02-04 00:02:16.847742 | orchestrator | + region = (known after apply)
2026-02-04 00:02:16.847746 | orchestrator | + size = 80
2026-02-04 00:02:16.847750 | orchestrator | + volume_retype_policy = "never"
2026-02-04 00:02:16.847754 | orchestrator | + volume_type = "ssd"
2026-02-04 00:02:16.847757 | orchestrator | }
2026-02-04 00:02:16.847761 | orchestrator |
2026-02-04 00:02:16.847765 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2026-02-04 00:02:16.847769 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-04 00:02:16.847772 | orchestrator | + attachment = (known after apply)
2026-02-04 00:02:16.847776 | orchestrator | + availability_zone = "nova"
2026-02-04 00:02:16.847780 | orchestrator | + id = (known after apply)
2026-02-04 00:02:16.847854 | orchestrator | + image_id = (known after apply)
2026-02-04 00:02:16.847858 | orchestrator | + metadata = (known after apply)
2026-02-04 00:02:16.847862 | orchestrator | + name = "testbed-volume-4-node-base"
2026-02-04 00:02:16.847865 | orchestrator | + region = (known after apply)
2026-02-04 00:02:16.847869 | orchestrator | + size = 80
2026-02-04 00:02:16.847873 | orchestrator | + volume_retype_policy = "never"
2026-02-04 00:02:16.847876 | orchestrator | + volume_type = "ssd"
2026-02-04 00:02:16.847880 | orchestrator | }
2026-02-04 00:02:16.847884 | orchestrator |
2026-02-04 00:02:16.847887 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2026-02-04 00:02:16.847891 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-04 00:02:16.847895 | orchestrator | + attachment = (known after apply)
2026-02-04 00:02:16.847899 | orchestrator | + availability_zone = "nova"
2026-02-04 00:02:16.847902 | orchestrator | + id = (known after apply)
2026-02-04 00:02:16.847906 | orchestrator | + image_id = (known after apply)
2026-02-04 00:02:16.847910 | orchestrator | + metadata = (known after apply)
2026-02-04 00:02:16.847913 | orchestrator | + name = "testbed-volume-5-node-base"
2026-02-04 00:02:16.847917 | orchestrator | + region = (known after apply)
2026-02-04 00:02:16.847921 | orchestrator | + size = 80
2026-02-04 00:02:16.847924 | orchestrator | + volume_retype_policy = "never"
2026-02-04 00:02:16.847928 | orchestrator | + volume_type = "ssd"
2026-02-04 00:02:16.847932 | orchestrator | }
2026-02-04 00:02:16.847936 | orchestrator |
2026-02-04 00:02:16.847939 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[0] will be created
2026-02-04 00:02:16.847944 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-04 00:02:16.847947 | orchestrator | + attachment = (known after apply)
2026-02-04 00:02:16.847951 | orchestrator | + availability_zone = "nova"
2026-02-04 00:02:16.847955 | orchestrator | + id = (known after apply)
2026-02-04 00:02:16.847958 | orchestrator | + metadata = (known after apply)
2026-02-04 00:02:16.847962 | orchestrator | + name = "testbed-volume-0-node-3"
2026-02-04 00:02:16.847966 | orchestrator | + region = (known after apply)
2026-02-04 00:02:16.847970 | orchestrator | + size = 20
2026-02-04 00:02:16.847973 | orchestrator | + volume_retype_policy = "never"
2026-02-04 00:02:16.847977 | orchestrator | + volume_type = "ssd"
2026-02-04 00:02:16.847981 | orchestrator | }
2026-02-04 00:02:16.847985 | orchestrator |
2026-02-04 00:02:16.847988 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[1] will be created
2026-02-04 00:02:16.847992 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-04 00:02:16.847996 | orchestrator | + attachment = (known after apply)
2026-02-04 00:02:16.847999 | orchestrator | + availability_zone = "nova"
2026-02-04 00:02:16.848003 | orchestrator | + id = (known after apply)
2026-02-04 00:02:16.848007 | orchestrator | + metadata = (known after apply)
2026-02-04 00:02:16.848011 | orchestrator | + name = "testbed-volume-1-node-4"
2026-02-04 00:02:16.848014 | orchestrator | + region = (known after apply)
2026-02-04 00:02:16.848018 | orchestrator | + size = 20
2026-02-04 00:02:16.848022 | orchestrator | + volume_retype_policy = "never"
2026-02-04 00:02:16.848025 | orchestrator | + volume_type = "ssd"
2026-02-04 00:02:16.848029 | orchestrator | }
2026-02-04 00:02:16.848033 | orchestrator |
2026-02-04 00:02:16.848037 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[2] will be created
2026-02-04 00:02:16.848040 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-04 00:02:16.848044 | orchestrator | + attachment = (known after apply)
2026-02-04 00:02:16.848048 | orchestrator | + availability_zone = "nova"
2026-02-04 00:02:16.848051 | orchestrator | + id = (known after apply)
2026-02-04 00:02:16.848055 | orchestrator | + metadata = (known after apply)
2026-02-04 00:02:16.848059 | orchestrator | + name = "testbed-volume-2-node-5"
2026-02-04 00:02:16.848063 | orchestrator | + region = (known after apply)
2026-02-04 00:02:16.848069 | orchestrator | + size = 20
2026-02-04 00:02:16.848073 | orchestrator | + volume_retype_policy = "never"
2026-02-04 00:02:16.848077 | orchestrator | + volume_type = "ssd"
2026-02-04 00:02:16.848080 | orchestrator | }
2026-02-04 00:02:16.848084 | orchestrator |
2026-02-04 00:02:16.848088 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[3] will be created
2026-02-04 00:02:16.848092 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-04 00:02:16.848095 | orchestrator | + attachment = (known after apply)
2026-02-04 00:02:16.848099 | orchestrator | + availability_zone = "nova"
2026-02-04 00:02:16.848103 | orchestrator | + id = (known after apply)
2026-02-04 00:02:16.848106 | orchestrator | + metadata = (known after apply)
2026-02-04 00:02:16.848110 | orchestrator | + name = "testbed-volume-3-node-3"
2026-02-04 00:02:16.848114 | orchestrator | + region = (known after apply)
2026-02-04 00:02:16.848117 | orchestrator | + size = 20
2026-02-04 00:02:16.848121 | orchestrator | + volume_retype_policy = "never"
2026-02-04 00:02:16.848125 | orchestrator | + volume_type = "ssd"
2026-02-04 00:02:16.848128 | orchestrator | }
2026-02-04 00:02:16.852567 | orchestrator |
2026-02-04 00:02:16.852618 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[4] will be created
2026-02-04 00:02:16.852624 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-04 00:02:16.852629 | orchestrator | + attachment = (known after apply)
2026-02-04 00:02:16.852634 | orchestrator | + availability_zone = "nova"
2026-02-04 00:02:16.852638 | orchestrator | + id = (known after apply)
2026-02-04 00:02:16.852642 | orchestrator | + metadata = (known after apply)
2026-02-04 00:02:16.852647 | orchestrator | + name = "testbed-volume-4-node-4"
2026-02-04 00:02:16.852651 | orchestrator | + region = (known after apply)
2026-02-04 00:02:16.852668 | orchestrator | + size = 20
2026-02-04 00:02:16.852673 | orchestrator | + volume_retype_policy = "never"
2026-02-04 00:02:16.852677 | orchestrator | + volume_type = "ssd"
2026-02-04 00:02:16.852689 | orchestrator | }
2026-02-04 00:02:16.853254 | orchestrator |
2026-02-04 00:02:16.853285 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[5] will be created
2026-02-04 00:02:16.853290 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-04 00:02:16.853295 | orchestrator | + attachment = (known after apply)
2026-02-04 00:02:16.853298 | orchestrator | + availability_zone = "nova"
2026-02-04 00:02:16.853302 | orchestrator | + id = (known after apply)
2026-02-04 00:02:16.853307 | orchestrator | + metadata = (known after apply)
2026-02-04 00:02:16.853311 | orchestrator | + name = "testbed-volume-5-node-5"
2026-02-04 00:02:16.853315 | orchestrator | + region = (known after apply)
2026-02-04 00:02:16.853319 | orchestrator | + size = 20
2026-02-04 00:02:16.853323 | orchestrator | + volume_retype_policy = "never"
2026-02-04 00:02:16.853326 | orchestrator | + volume_type = "ssd"
2026-02-04 00:02:16.853330 | orchestrator | }
2026-02-04 00:02:16.853414 | orchestrator |
2026-02-04 00:02:16.853426 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[6] will be created
2026-02-04 00:02:16.853430 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-04 00:02:16.853434 | orchestrator | + attachment = (known after apply)
2026-02-04 00:02:16.853438 | orchestrator | + availability_zone = "nova"
2026-02-04 00:02:16.853441 | orchestrator | + id = (known after apply)
2026-02-04 00:02:16.853445 | orchestrator | + metadata = (known after apply)
2026-02-04 00:02:16.853449 | orchestrator | + name = "testbed-volume-6-node-3"
2026-02-04 00:02:16.853453 | orchestrator | + region = (known after apply)
2026-02-04 00:02:16.853457 | orchestrator | + size = 20
2026-02-04 00:02:16.853461 | orchestrator | + volume_retype_policy = "never"
2026-02-04 00:02:16.853464 | orchestrator | + volume_type = "ssd"
2026-02-04 00:02:16.853468 | orchestrator | }
2026-02-04 00:02:16.853533 | orchestrator |
2026-02-04 00:02:16.853544 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[7] will be created
2026-02-04 00:02:16.853549 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-04 00:02:16.853566 | orchestrator | + attachment = (known after apply)
2026-02-04 00:02:16.853570 | orchestrator | + availability_zone = "nova"
2026-02-04 00:02:16.853574 | orchestrator | + id = (known after apply)
2026-02-04 00:02:16.853578 | orchestrator | + metadata = (known after apply)
2026-02-04 00:02:16.853582 | orchestrator | + name = "testbed-volume-7-node-4"
2026-02-04 00:02:16.853585 | orchestrator | + region = (known after apply)
2026-02-04 00:02:16.853589 | orchestrator | + size = 20
2026-02-04 00:02:16.853594 | orchestrator | + volume_retype_policy = "never"
2026-02-04 00:02:16.853598 | orchestrator | + volume_type = "ssd"
2026-02-04 00:02:16.853601 | orchestrator | }
2026-02-04 00:02:16.853672 | orchestrator |
2026-02-04 00:02:16.853684 | orchestrator | #
openstack_blockstorage_volume_v3.node_volume[8] will be created 2026-02-04 00:02:16.853689 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-02-04 00:02:16.853692 | orchestrator | + attachment = (known after apply) 2026-02-04 00:02:16.853696 | orchestrator | + availability_zone = "nova" 2026-02-04 00:02:16.853700 | orchestrator | + id = (known after apply) 2026-02-04 00:02:16.853704 | orchestrator | + metadata = (known after apply) 2026-02-04 00:02:16.853708 | orchestrator | + name = "testbed-volume-8-node-5" 2026-02-04 00:02:16.853711 | orchestrator | + region = (known after apply) 2026-02-04 00:02:16.853715 | orchestrator | + size = 20 2026-02-04 00:02:16.853719 | orchestrator | + volume_retype_policy = "never" 2026-02-04 00:02:16.853722 | orchestrator | + volume_type = "ssd" 2026-02-04 00:02:16.853726 | orchestrator | } 2026-02-04 00:02:16.853937 | orchestrator | 2026-02-04 00:02:16.853950 | orchestrator | # openstack_compute_instance_v2.manager_server will be created 2026-02-04 00:02:16.853955 | orchestrator | + resource "openstack_compute_instance_v2" "manager_server" { 2026-02-04 00:02:16.853959 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-04 00:02:16.853962 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-04 00:02:16.853966 | orchestrator | + all_metadata = (known after apply) 2026-02-04 00:02:16.853970 | orchestrator | + all_tags = (known after apply) 2026-02-04 00:02:16.853974 | orchestrator | + availability_zone = "nova" 2026-02-04 00:02:16.853977 | orchestrator | + config_drive = true 2026-02-04 00:02:16.853981 | orchestrator | + created = (known after apply) 2026-02-04 00:02:16.853985 | orchestrator | + flavor_id = (known after apply) 2026-02-04 00:02:16.853988 | orchestrator | + flavor_name = "OSISM-4V-16" 2026-02-04 00:02:16.853992 | orchestrator | + force_delete = false 2026-02-04 00:02:16.853996 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-04 00:02:16.854000 | 
orchestrator | + id = (known after apply) 2026-02-04 00:02:16.854003 | orchestrator | + image_id = (known after apply) 2026-02-04 00:02:16.854007 | orchestrator | + image_name = (known after apply) 2026-02-04 00:02:16.854011 | orchestrator | + key_pair = "testbed" 2026-02-04 00:02:16.854133 | orchestrator | + name = "testbed-manager" 2026-02-04 00:02:16.854138 | orchestrator | + power_state = "active" 2026-02-04 00:02:16.854141 | orchestrator | + region = (known after apply) 2026-02-04 00:02:16.854145 | orchestrator | + security_groups = (known after apply) 2026-02-04 00:02:16.854149 | orchestrator | + stop_before_destroy = false 2026-02-04 00:02:16.854152 | orchestrator | + updated = (known after apply) 2026-02-04 00:02:16.854156 | orchestrator | + user_data = (sensitive value) 2026-02-04 00:02:16.854160 | orchestrator | 2026-02-04 00:02:16.854164 | orchestrator | + block_device { 2026-02-04 00:02:16.854184 | orchestrator | + boot_index = 0 2026-02-04 00:02:16.854188 | orchestrator | + delete_on_termination = false 2026-02-04 00:02:16.854196 | orchestrator | + destination_type = "volume" 2026-02-04 00:02:16.854200 | orchestrator | + multiattach = false 2026-02-04 00:02:16.854204 | orchestrator | + source_type = "volume" 2026-02-04 00:02:16.854208 | orchestrator | + uuid = (known after apply) 2026-02-04 00:02:16.854218 | orchestrator | } 2026-02-04 00:02:16.854222 | orchestrator | 2026-02-04 00:02:16.854226 | orchestrator | + network { 2026-02-04 00:02:16.854229 | orchestrator | + access_network = false 2026-02-04 00:02:16.854233 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-04 00:02:16.854237 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-04 00:02:16.854241 | orchestrator | + mac = (known after apply) 2026-02-04 00:02:16.854244 | orchestrator | + name = (known after apply) 2026-02-04 00:02:16.854248 | orchestrator | + port = (known after apply) 2026-02-04 00:02:16.854252 | orchestrator | + uuid = (known after apply) 2026-02-04 
00:02:16.854256 | orchestrator | } 2026-02-04 00:02:16.854259 | orchestrator | } 2026-02-04 00:02:16.854459 | orchestrator | 2026-02-04 00:02:16.854472 | orchestrator | # openstack_compute_instance_v2.node_server[0] will be created 2026-02-04 00:02:16.854477 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-02-04 00:02:16.854480 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-04 00:02:16.854484 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-04 00:02:16.854488 | orchestrator | + all_metadata = (known after apply) 2026-02-04 00:02:16.854492 | orchestrator | + all_tags = (known after apply) 2026-02-04 00:02:16.854495 | orchestrator | + availability_zone = "nova" 2026-02-04 00:02:16.854499 | orchestrator | + config_drive = true 2026-02-04 00:02:16.854503 | orchestrator | + created = (known after apply) 2026-02-04 00:02:16.854507 | orchestrator | + flavor_id = (known after apply) 2026-02-04 00:02:16.854510 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-04 00:02:16.854514 | orchestrator | + force_delete = false 2026-02-04 00:02:16.854518 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-04 00:02:16.854521 | orchestrator | + id = (known after apply) 2026-02-04 00:02:16.854525 | orchestrator | + image_id = (known after apply) 2026-02-04 00:02:16.854529 | orchestrator | + image_name = (known after apply) 2026-02-04 00:02:16.854533 | orchestrator | + key_pair = "testbed" 2026-02-04 00:02:16.854536 | orchestrator | + name = "testbed-node-0" 2026-02-04 00:02:16.854540 | orchestrator | + power_state = "active" 2026-02-04 00:02:16.854544 | orchestrator | + region = (known after apply) 2026-02-04 00:02:16.854548 | orchestrator | + security_groups = (known after apply) 2026-02-04 00:02:16.854551 | orchestrator | + stop_before_destroy = false 2026-02-04 00:02:16.854555 | orchestrator | + updated = (known after apply) 2026-02-04 00:02:16.854559 | orchestrator | + user_data = 
"ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-04 00:02:16.854563 | orchestrator | 2026-02-04 00:02:16.854566 | orchestrator | + block_device { 2026-02-04 00:02:16.854570 | orchestrator | + boot_index = 0 2026-02-04 00:02:16.854574 | orchestrator | + delete_on_termination = false 2026-02-04 00:02:16.854578 | orchestrator | + destination_type = "volume" 2026-02-04 00:02:16.854581 | orchestrator | + multiattach = false 2026-02-04 00:02:16.854585 | orchestrator | + source_type = "volume" 2026-02-04 00:02:16.854589 | orchestrator | + uuid = (known after apply) 2026-02-04 00:02:16.854593 | orchestrator | } 2026-02-04 00:02:16.854596 | orchestrator | 2026-02-04 00:02:16.854600 | orchestrator | + network { 2026-02-04 00:02:16.854604 | orchestrator | + access_network = false 2026-02-04 00:02:16.854608 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-04 00:02:16.854611 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-04 00:02:16.854615 | orchestrator | + mac = (known after apply) 2026-02-04 00:02:16.854619 | orchestrator | + name = (known after apply) 2026-02-04 00:02:16.854623 | orchestrator | + port = (known after apply) 2026-02-04 00:02:16.854626 | orchestrator | + uuid = (known after apply) 2026-02-04 00:02:16.854630 | orchestrator | } 2026-02-04 00:02:16.854634 | orchestrator | } 2026-02-04 00:02:16.854811 | orchestrator | 2026-02-04 00:02:16.854823 | orchestrator | # openstack_compute_instance_v2.node_server[1] will be created 2026-02-04 00:02:16.854827 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-02-04 00:02:16.854831 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-04 00:02:16.854838 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-04 00:02:16.854842 | orchestrator | + all_metadata = (known after apply) 2026-02-04 00:02:16.854846 | orchestrator | + all_tags = (known after apply) 2026-02-04 00:02:16.854850 | orchestrator | + availability_zone = "nova" 2026-02-04 00:02:16.854854 
| orchestrator | + config_drive = true 2026-02-04 00:02:16.854857 | orchestrator | + created = (known after apply) 2026-02-04 00:02:16.854861 | orchestrator | + flavor_id = (known after apply) 2026-02-04 00:02:16.854865 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-04 00:02:16.854869 | orchestrator | + force_delete = false 2026-02-04 00:02:16.854872 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-04 00:02:16.854876 | orchestrator | + id = (known after apply) 2026-02-04 00:02:16.854880 | orchestrator | + image_id = (known after apply) 2026-02-04 00:02:16.854884 | orchestrator | + image_name = (known after apply) 2026-02-04 00:02:16.854887 | orchestrator | + key_pair = "testbed" 2026-02-04 00:02:16.854891 | orchestrator | + name = "testbed-node-1" 2026-02-04 00:02:16.854895 | orchestrator | + power_state = "active" 2026-02-04 00:02:16.854899 | orchestrator | + region = (known after apply) 2026-02-04 00:02:16.854902 | orchestrator | + security_groups = (known after apply) 2026-02-04 00:02:16.854906 | orchestrator | + stop_before_destroy = false 2026-02-04 00:02:16.854910 | orchestrator | + updated = (known after apply) 2026-02-04 00:02:16.854914 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-04 00:02:16.854917 | orchestrator | 2026-02-04 00:02:16.854921 | orchestrator | + block_device { 2026-02-04 00:02:16.854925 | orchestrator | + boot_index = 0 2026-02-04 00:02:16.854929 | orchestrator | + delete_on_termination = false 2026-02-04 00:02:16.854932 | orchestrator | + destination_type = "volume" 2026-02-04 00:02:16.854936 | orchestrator | + multiattach = false 2026-02-04 00:02:16.854940 | orchestrator | + source_type = "volume" 2026-02-04 00:02:16.854944 | orchestrator | + uuid = (known after apply) 2026-02-04 00:02:16.854947 | orchestrator | } 2026-02-04 00:02:16.854951 | orchestrator | 2026-02-04 00:02:16.854955 | orchestrator | + network { 2026-02-04 00:02:16.854959 | orchestrator | + access_network = 
false 2026-02-04 00:02:16.854962 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-04 00:02:16.854966 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-04 00:02:16.854970 | orchestrator | + mac = (known after apply) 2026-02-04 00:02:16.854974 | orchestrator | + name = (known after apply) 2026-02-04 00:02:16.854977 | orchestrator | + port = (known after apply) 2026-02-04 00:02:16.854981 | orchestrator | + uuid = (known after apply) 2026-02-04 00:02:16.854985 | orchestrator | } 2026-02-04 00:02:16.854989 | orchestrator | } 2026-02-04 00:02:16.855180 | orchestrator | 2026-02-04 00:02:16.855193 | orchestrator | # openstack_compute_instance_v2.node_server[2] will be created 2026-02-04 00:02:16.855197 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-02-04 00:02:16.855201 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-04 00:02:16.855205 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-04 00:02:16.855211 | orchestrator | + all_metadata = (known after apply) 2026-02-04 00:02:16.855215 | orchestrator | + all_tags = (known after apply) 2026-02-04 00:02:16.855222 | orchestrator | + availability_zone = "nova" 2026-02-04 00:02:16.855226 | orchestrator | + config_drive = true 2026-02-04 00:02:16.855229 | orchestrator | + created = (known after apply) 2026-02-04 00:02:16.855233 | orchestrator | + flavor_id = (known after apply) 2026-02-04 00:02:16.855237 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-04 00:02:16.855241 | orchestrator | + force_delete = false 2026-02-04 00:02:16.855244 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-04 00:02:16.855248 | orchestrator | + id = (known after apply) 2026-02-04 00:02:16.855252 | orchestrator | + image_id = (known after apply) 2026-02-04 00:02:16.855259 | orchestrator | + image_name = (known after apply) 2026-02-04 00:02:16.855263 | orchestrator | + key_pair = "testbed" 2026-02-04 00:02:16.855267 | orchestrator | + name = 
"testbed-node-2" 2026-02-04 00:02:16.855270 | orchestrator | + power_state = "active" 2026-02-04 00:02:16.855274 | orchestrator | + region = (known after apply) 2026-02-04 00:02:16.855278 | orchestrator | + security_groups = (known after apply) 2026-02-04 00:02:16.855282 | orchestrator | + stop_before_destroy = false 2026-02-04 00:02:16.855285 | orchestrator | + updated = (known after apply) 2026-02-04 00:02:16.855289 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-04 00:02:16.855293 | orchestrator | 2026-02-04 00:02:16.855297 | orchestrator | + block_device { 2026-02-04 00:02:16.855301 | orchestrator | + boot_index = 0 2026-02-04 00:02:16.855304 | orchestrator | + delete_on_termination = false 2026-02-04 00:02:16.855308 | orchestrator | + destination_type = "volume" 2026-02-04 00:02:16.855312 | orchestrator | + multiattach = false 2026-02-04 00:02:16.855316 | orchestrator | + source_type = "volume" 2026-02-04 00:02:16.855319 | orchestrator | + uuid = (known after apply) 2026-02-04 00:02:16.855323 | orchestrator | } 2026-02-04 00:02:16.855327 | orchestrator | 2026-02-04 00:02:16.855331 | orchestrator | + network { 2026-02-04 00:02:16.855335 | orchestrator | + access_network = false 2026-02-04 00:02:16.855338 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-04 00:02:16.855342 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-04 00:02:16.855346 | orchestrator | + mac = (known after apply) 2026-02-04 00:02:16.855350 | orchestrator | + name = (known after apply) 2026-02-04 00:02:16.855353 | orchestrator | + port = (known after apply) 2026-02-04 00:02:16.855357 | orchestrator | + uuid = (known after apply) 2026-02-04 00:02:16.855361 | orchestrator | } 2026-02-04 00:02:16.855365 | orchestrator | } 2026-02-04 00:02:16.855549 | orchestrator | 2026-02-04 00:02:16.855560 | orchestrator | # openstack_compute_instance_v2.node_server[3] will be created 2026-02-04 00:02:16.855565 | orchestrator | + resource 
"openstack_compute_instance_v2" "node_server" { 2026-02-04 00:02:16.855568 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-04 00:02:16.855572 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-04 00:02:16.855576 | orchestrator | + all_metadata = (known after apply) 2026-02-04 00:02:16.855580 | orchestrator | + all_tags = (known after apply) 2026-02-04 00:02:16.855584 | orchestrator | + availability_zone = "nova" 2026-02-04 00:02:16.855588 | orchestrator | + config_drive = true 2026-02-04 00:02:16.855592 | orchestrator | + created = (known after apply) 2026-02-04 00:02:16.855595 | orchestrator | + flavor_id = (known after apply) 2026-02-04 00:02:16.855599 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-04 00:02:16.855603 | orchestrator | + force_delete = false 2026-02-04 00:02:16.855607 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-04 00:02:16.855610 | orchestrator | + id = (known after apply) 2026-02-04 00:02:16.855614 | orchestrator | + image_id = (known after apply) 2026-02-04 00:02:16.855618 | orchestrator | + image_name = (known after apply) 2026-02-04 00:02:16.855622 | orchestrator | + key_pair = "testbed" 2026-02-04 00:02:16.855625 | orchestrator | + name = "testbed-node-3" 2026-02-04 00:02:16.855629 | orchestrator | + power_state = "active" 2026-02-04 00:02:16.855633 | orchestrator | + region = (known after apply) 2026-02-04 00:02:16.855636 | orchestrator | + security_groups = (known after apply) 2026-02-04 00:02:16.855640 | orchestrator | + stop_before_destroy = false 2026-02-04 00:02:16.855644 | orchestrator | + updated = (known after apply) 2026-02-04 00:02:16.855648 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-04 00:02:16.855651 | orchestrator | 2026-02-04 00:02:16.855655 | orchestrator | + block_device { 2026-02-04 00:02:16.855662 | orchestrator | + boot_index = 0 2026-02-04 00:02:16.855666 | orchestrator | + delete_on_termination = false 2026-02-04 
00:02:16.855669 | orchestrator | + destination_type = "volume" 2026-02-04 00:02:16.855677 | orchestrator | + multiattach = false 2026-02-04 00:02:16.855681 | orchestrator | + source_type = "volume" 2026-02-04 00:02:16.855685 | orchestrator | + uuid = (known after apply) 2026-02-04 00:02:16.855688 | orchestrator | } 2026-02-04 00:02:16.855692 | orchestrator | 2026-02-04 00:02:16.855696 | orchestrator | + network { 2026-02-04 00:02:16.855700 | orchestrator | + access_network = false 2026-02-04 00:02:16.855704 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-04 00:02:16.855707 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-04 00:02:16.855711 | orchestrator | + mac = (known after apply) 2026-02-04 00:02:16.855715 | orchestrator | + name = (known after apply) 2026-02-04 00:02:16.855719 | orchestrator | + port = (known after apply) 2026-02-04 00:02:16.855722 | orchestrator | + uuid = (known after apply) 2026-02-04 00:02:16.855726 | orchestrator | } 2026-02-04 00:02:16.855730 | orchestrator | } 2026-02-04 00:02:16.855912 | orchestrator | 2026-02-04 00:02:16.855923 | orchestrator | # openstack_compute_instance_v2.node_server[4] will be created 2026-02-04 00:02:16.855927 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-02-04 00:02:16.855931 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-04 00:02:16.855935 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-04 00:02:16.855939 | orchestrator | + all_metadata = (known after apply) 2026-02-04 00:02:16.855943 | orchestrator | + all_tags = (known after apply) 2026-02-04 00:02:16.855947 | orchestrator | + availability_zone = "nova" 2026-02-04 00:02:16.855950 | orchestrator | + config_drive = true 2026-02-04 00:02:16.855954 | orchestrator | + created = (known after apply) 2026-02-04 00:02:16.855958 | orchestrator | + flavor_id = (known after apply) 2026-02-04 00:02:16.855962 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-04 00:02:16.855965 | 
orchestrator | + force_delete = false 2026-02-04 00:02:16.855969 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-04 00:02:16.855973 | orchestrator | + id = (known after apply) 2026-02-04 00:02:16.855976 | orchestrator | + image_id = (known after apply) 2026-02-04 00:02:16.855980 | orchestrator | + image_name = (known after apply) 2026-02-04 00:02:16.855984 | orchestrator | + key_pair = "testbed" 2026-02-04 00:02:16.855988 | orchestrator | + name = "testbed-node-4" 2026-02-04 00:02:16.855991 | orchestrator | + power_state = "active" 2026-02-04 00:02:16.855995 | orchestrator | + region = (known after apply) 2026-02-04 00:02:16.855999 | orchestrator | + security_groups = (known after apply) 2026-02-04 00:02:16.856002 | orchestrator | + stop_before_destroy = false 2026-02-04 00:02:16.856006 | orchestrator | + updated = (known after apply) 2026-02-04 00:02:16.856010 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-04 00:02:16.856014 | orchestrator | 2026-02-04 00:02:16.856017 | orchestrator | + block_device { 2026-02-04 00:02:16.856021 | orchestrator | + boot_index = 0 2026-02-04 00:02:16.856025 | orchestrator | + delete_on_termination = false 2026-02-04 00:02:16.856028 | orchestrator | + destination_type = "volume" 2026-02-04 00:02:16.856032 | orchestrator | + multiattach = false 2026-02-04 00:02:16.856036 | orchestrator | + source_type = "volume" 2026-02-04 00:02:16.856040 | orchestrator | + uuid = (known after apply) 2026-02-04 00:02:16.856043 | orchestrator | } 2026-02-04 00:02:16.856047 | orchestrator | 2026-02-04 00:02:16.856051 | orchestrator | + network { 2026-02-04 00:02:16.856054 | orchestrator | + access_network = false 2026-02-04 00:02:16.856058 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-04 00:02:16.856062 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-04 00:02:16.856066 | orchestrator | + mac = (known after apply) 2026-02-04 00:02:16.856069 | orchestrator | + name = (known 
after apply) 2026-02-04 00:02:16.856073 | orchestrator | + port = (known after apply) 2026-02-04 00:02:16.856077 | orchestrator | + uuid = (known after apply) 2026-02-04 00:02:16.856080 | orchestrator | } 2026-02-04 00:02:16.856084 | orchestrator | } 2026-02-04 00:02:16.856319 | orchestrator | 2026-02-04 00:02:16.856333 | orchestrator | # openstack_compute_instance_v2.node_server[5] will be created 2026-02-04 00:02:16.856338 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-02-04 00:02:16.856342 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-04 00:02:16.856345 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-04 00:02:16.856349 | orchestrator | + all_metadata = (known after apply) 2026-02-04 00:02:16.856353 | orchestrator | + all_tags = (known after apply) 2026-02-04 00:02:16.856357 | orchestrator | + availability_zone = "nova" 2026-02-04 00:02:16.856360 | orchestrator | + config_drive = true 2026-02-04 00:02:16.856364 | orchestrator | + created = (known after apply) 2026-02-04 00:02:16.856368 | orchestrator | + flavor_id = (known after apply) 2026-02-04 00:02:16.856371 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-04 00:02:16.856375 | orchestrator | + force_delete = false 2026-02-04 00:02:16.856382 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-04 00:02:16.856386 | orchestrator | + id = (known after apply) 2026-02-04 00:02:16.856390 | orchestrator | + image_id = (known after apply) 2026-02-04 00:02:16.856393 | orchestrator | + image_name = (known after apply) 2026-02-04 00:02:16.856397 | orchestrator | + key_pair = "testbed" 2026-02-04 00:02:16.856401 | orchestrator | + name = "testbed-node-5" 2026-02-04 00:02:16.856404 | orchestrator | + power_state = "active" 2026-02-04 00:02:16.856408 | orchestrator | + region = (known after apply) 2026-02-04 00:02:16.856412 | orchestrator | + security_groups = (known after apply) 2026-02-04 00:02:16.856415 | orchestrator | + 
stop_before_destroy = false 2026-02-04 00:02:16.856419 | orchestrator | + updated = (known after apply) 2026-02-04 00:02:16.856423 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-04 00:02:16.856427 | orchestrator | 2026-02-04 00:02:16.856430 | orchestrator | + block_device { 2026-02-04 00:02:16.856434 | orchestrator | + boot_index = 0 2026-02-04 00:02:16.856438 | orchestrator | + delete_on_termination = false 2026-02-04 00:02:16.856442 | orchestrator | + destination_type = "volume" 2026-02-04 00:02:16.856445 | orchestrator | + multiattach = false 2026-02-04 00:02:16.856449 | orchestrator | + source_type = "volume" 2026-02-04 00:02:16.856453 | orchestrator | + uuid = (known after apply) 2026-02-04 00:02:16.856456 | orchestrator | } 2026-02-04 00:02:16.856460 | orchestrator | 2026-02-04 00:02:16.856464 | orchestrator | + network { 2026-02-04 00:02:16.856467 | orchestrator | + access_network = false 2026-02-04 00:02:16.856471 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-04 00:02:16.856475 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-04 00:02:16.856478 | orchestrator | + mac = (known after apply) 2026-02-04 00:02:16.856482 | orchestrator | + name = (known after apply) 2026-02-04 00:02:16.856486 | orchestrator | + port = (known after apply) 2026-02-04 00:02:16.856490 | orchestrator | + uuid = (known after apply) 2026-02-04 00:02:16.856493 | orchestrator | } 2026-02-04 00:02:16.856497 | orchestrator | } 2026-02-04 00:02:16.856542 | orchestrator | 2026-02-04 00:02:16.856554 | orchestrator | # openstack_compute_keypair_v2.key will be created 2026-02-04 00:02:16.856558 | orchestrator | + resource "openstack_compute_keypair_v2" "key" { 2026-02-04 00:02:16.856562 | orchestrator | + fingerprint = (known after apply) 2026-02-04 00:02:16.856566 | orchestrator | + id = (known after apply) 2026-02-04 00:02:16.856570 | orchestrator | + name = "testbed" 2026-02-04 00:02:16.856573 | orchestrator | + private_key = 
(sensitive value) 2026-02-04 00:02:16.856577 | orchestrator | + public_key = (known after apply) 2026-02-04 00:02:16.856581 | orchestrator | + region = (known after apply) 2026-02-04 00:02:16.856585 | orchestrator | + user_id = (known after apply) 2026-02-04 00:02:16.856588 | orchestrator | } 2026-02-04 00:02:16.856626 | orchestrator | 2026-02-04 00:02:16.856637 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2026-02-04 00:02:16.856641 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-02-04 00:02:16.856649 | orchestrator | + device = (known after apply) 2026-02-04 00:02:16.856653 | orchestrator | + id = (known after apply) 2026-02-04 00:02:16.856657 | orchestrator | + instance_id = (known after apply) 2026-02-04 00:02:16.856661 | orchestrator | + region = (known after apply) 2026-02-04 00:02:16.856665 | orchestrator | + volume_id = (known after apply) 2026-02-04 00:02:16.856668 | orchestrator | } 2026-02-04 00:02:16.856704 | orchestrator | 2026-02-04 00:02:16.856714 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2026-02-04 00:02:16.856719 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-02-04 00:02:16.856723 | orchestrator | + device = (known after apply) 2026-02-04 00:02:16.856726 | orchestrator | + id = (known after apply) 2026-02-04 00:02:16.856730 | orchestrator | + instance_id = (known after apply) 2026-02-04 00:02:16.856734 | orchestrator | + region = (known after apply) 2026-02-04 00:02:16.856738 | orchestrator | + volume_id = (known after apply) 2026-02-04 00:02:16.856741 | orchestrator | } 2026-02-04 00:02:16.856776 | orchestrator | 2026-02-04 00:02:16.856787 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2026-02-04 00:02:16.856791 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" 
{
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[2] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.12"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[3] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.13"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[4] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.14"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[5] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.15"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_router_interface_v2.router_interface will be created
  + resource "openstack_networking_router_interface_v2" "router_interface" {
      + force_destroy = false
      + id            = (known after apply)
      + port_id       = (known after apply)
      + region        = (known after apply)
      + router_id     = (known after apply)
      + subnet_id     = (known after apply)
    }

  # openstack_networking_router_v2.router will be created
  + resource "openstack_networking_router_v2" "router" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + distributed             = (known after apply)
      + enable_snat             = (known after apply)
      + external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
      + external_qos_policy_id  = (known after apply)
      + id                      = (known after apply)
      + name                    = "testbed"
      + region                  = (known after apply)
      + tenant_id               = (known after apply)

      + external_fixed_ip (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
      + description             = "ssh"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 22
      + port_range_min          = 22
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
      + description             = "wireguard"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 51820
      + port_range_min          = 51820
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
      + description             = "vrrp"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "112"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_management will be created
  + resource "openstack_networking_secgroup_v2" "security_group_management" {
      + all_tags    = (known after apply)
      + description = "management security group"
      + id          = (known after apply)
      + name        = "testbed-management"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_node will be created
  + resource "openstack_networking_secgroup_v2" "security_group_node" {
      + all_tags    = (known after apply)
      + description = "node security group"
      + id          = (known after apply)
      + name        = "testbed-node"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_subnet_v2.subnet_management will be created
  + resource "openstack_networking_subnet_v2" "subnet_management" {
      + all_tags          = (known after apply)
      + cidr              = "192.168.16.0/20"
      + dns_nameservers   = [
          + "8.8.8.8",
          + "9.9.9.9",
        ]
      + enable_dhcp       = true
      + gateway_ip        = (known after apply)
      + id                = (known after apply)
      + ip_version        = 4
      + ipv6_address_mode = (known after apply)
      + ipv6_ra_mode      = (known after apply)
      + name              = "subnet-testbed-management"
2026-02-04 00:02:16.874666 | orchestrator | + network_id = (known after apply) 2026-02-04 00:02:16.874670 | orchestrator | + no_gateway = false 2026-02-04 00:02:16.874674 | orchestrator | + region = (known after apply) 2026-02-04 00:02:16.874678 | orchestrator | + service_types = (known after apply) 2026-02-04 00:02:16.874687 | orchestrator | + tenant_id = (known after apply) 2026-02-04 00:02:16.874691 | orchestrator | 2026-02-04 00:02:16.874695 | orchestrator | + allocation_pool { 2026-02-04 00:02:16.874699 | orchestrator | + end = "192.168.31.250" 2026-02-04 00:02:16.874704 | orchestrator | + start = "192.168.31.200" 2026-02-04 00:02:16.874708 | orchestrator | } 2026-02-04 00:02:16.874712 | orchestrator | } 2026-02-04 00:02:16.874745 | orchestrator | 2026-02-04 00:02:16.874757 | orchestrator | # terraform_data.image will be created 2026-02-04 00:02:16.874761 | orchestrator | + resource "terraform_data" "image" { 2026-02-04 00:02:16.874765 | orchestrator | + id = (known after apply) 2026-02-04 00:02:16.874770 | orchestrator | + input = "Ubuntu 24.04" 2026-02-04 00:02:16.874774 | orchestrator | + output = (known after apply) 2026-02-04 00:02:16.874778 | orchestrator | } 2026-02-04 00:02:16.874809 | orchestrator | 2026-02-04 00:02:16.874821 | orchestrator | # terraform_data.image_node will be created 2026-02-04 00:02:16.874825 | orchestrator | + resource "terraform_data" "image_node" { 2026-02-04 00:02:16.874830 | orchestrator | + id = (known after apply) 2026-02-04 00:02:16.874834 | orchestrator | + input = "Ubuntu 24.04" 2026-02-04 00:02:16.874838 | orchestrator | + output = (known after apply) 2026-02-04 00:02:16.874842 | orchestrator | } 2026-02-04 00:02:16.874858 | orchestrator | 2026-02-04 00:02:16.874863 | orchestrator | Plan: 64 to add, 0 to change, 0 to destroy. 
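For orientation, the VRRP rule and management subnet shown in the plan above would correspond roughly to the following Terraform source. This is a hedged reconstruction from the plan output only, not the testbed's actual code: the cross-resource references (`security_group_id`, `network_id`) are assumptions, since the plan prints them as "(known after apply)".

```hcl
# Hedged reconstruction from the plan output; attribute values are taken from
# the plan, but the resource references are assumptions.
resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
  description       = "vrrp"
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "112" # IP protocol number 112 is VRRP
  remote_ip_prefix  = "0.0.0.0/0"
  # Assumed reference; the plan only shows "(known after apply)".
  security_group_id = openstack_networking_secgroup_v2.security_group_management.id
}

resource "openstack_networking_subnet_v2" "subnet_management" {
  name            = "subnet-testbed-management"
  # Assumed reference to the management network created in the same plan.
  network_id      = openstack_networking_network_v2.net_management.id
  cidr            = "192.168.16.0/20"
  ip_version      = 4
  enable_dhcp     = true
  dns_nameservers = ["8.8.8.8", "9.9.9.9"]

  allocation_pool {
    start = "192.168.31.200"
    end   = "192.168.31.250"
  }
}
```

Note that the allocation pool covers only the top of the /20, which keeps the lower part of the range free for the statically addressed ports created later in the apply.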
2026-02-04 00:02:16.874875 | orchestrator |
2026-02-04 00:02:16.874880 | orchestrator | Changes to Outputs:
2026-02-04 00:02:16.874891 | orchestrator | + manager_address = (sensitive value)
2026-02-04 00:02:16.874896 | orchestrator | + private_key = (sensitive value)
2026-02-04 00:02:16.994067 | orchestrator | terraform_data.image: Creating...
2026-02-04 00:02:17.110256 | orchestrator | terraform_data.image: Creation complete after 0s [id=be6a6b80-41a0-c063-36bc-41f83ccea6fe]
2026-02-04 00:02:17.110327 | orchestrator | terraform_data.image_node: Creating...
2026-02-04 00:02:17.115247 | orchestrator | terraform_data.image_node: Creation complete after 0s [id=af00380c-fd86-f8fe-907d-b7b6bc571dd4]
2026-02-04 00:02:17.143194 | orchestrator | data.openstack_images_image_v2.image: Reading...
2026-02-04 00:02:17.143266 | orchestrator | data.openstack_images_image_v2.image_node: Reading...
2026-02-04 00:02:17.156537 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creating...
2026-02-04 00:02:17.157711 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creating...
2026-02-04 00:02:17.162240 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creating...
2026-02-04 00:02:17.162282 | orchestrator | openstack_networking_network_v2.net_management: Creating...
2026-02-04 00:02:17.162287 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creating...
2026-02-04 00:02:17.162291 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creating...
2026-02-04 00:02:17.162296 | orchestrator | openstack_compute_keypair_v2.key: Creating...
2026-02-04 00:02:17.168121 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creating...
2026-02-04 00:02:17.697801 | orchestrator | data.openstack_images_image_v2.image: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-02-04 00:02:17.704053 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creating...
2026-02-04 00:02:17.706781 | orchestrator | data.openstack_images_image_v2.image_node: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-02-04 00:02:17.710941 | orchestrator | openstack_compute_keypair_v2.key: Creation complete after 1s [id=testbed]
2026-02-04 00:02:17.716601 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creating...
2026-02-04 00:02:17.722588 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creating...
2026-02-04 00:02:18.340753 | orchestrator | openstack_networking_network_v2.net_management: Creation complete after 1s [id=8d288664-0af8-4a72-8994-e3f3f83ba052]
2026-02-04 00:02:18.351674 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2026-02-04 00:02:20.790456 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 4s [id=194ead4c-37cf-4237-b5c2-bf752e6bc508]
2026-02-04 00:02:20.802624 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2026-02-04 00:02:20.837694 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 4s [id=d0af9621-3ff0-4b28-b816-705c9ef71a8d]
2026-02-04 00:02:20.850587 | orchestrator | local_sensitive_file.id_rsa: Creating...
2026-02-04 00:02:20.857453 | orchestrator | local_sensitive_file.id_rsa: Creation complete after 0s [id=4d16a31e1aec4209cf396d28e16f15301704d2b9]
2026-02-04 00:02:20.857785 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 4s [id=27db8536-d7cf-467f-b2b6-f0129584608d]
2026-02-04 00:02:20.862405 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 4s [id=fd9a253f-e742-4747-9193-aa6fcde93089]
2026-02-04 00:02:20.867198 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2026-02-04 00:02:20.868083 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2026-02-04 00:02:20.870487 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2026-02-04 00:02:20.881183 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 4s [id=ba01a385-e2e8-43d7-8237-fc6e15a9de89]
2026-02-04 00:02:20.887501 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 4s [id=54fed4a3-dd06-43ea-9731-a81abbed62bd]
2026-02-04 00:02:20.889614 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2026-02-04 00:02:20.893886 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2026-02-04 00:02:20.977711 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 3s [id=092c1f4e-b194-45a0-a7eb-d90ae37efda4]
2026-02-04 00:02:20.992266 | orchestrator | local_file.id_rsa_pub: Creating...
2026-02-04 00:02:21.000944 | orchestrator | local_file.id_rsa_pub: Creation complete after 0s [id=0b0a5800eaffe105edb72780e8aa203ebefcc919]
2026-02-04 00:02:21.008066 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creating...
2026-02-04 00:02:21.022315 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 3s [id=03b06afa-7fea-4d7e-bf2e-7215727f5f52]
2026-02-04 00:02:21.226350 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 3s [id=7fd0bd10-abd8-4e0c-8290-4705cc531d08]
2026-02-04 00:02:21.819603 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 4s [id=8eaaedd3-72e9-4413-bbdf-de8fcd38f040]
2026-02-04 00:02:21.937529 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=c1c48e78-6dad-495c-be68-794087f1b2a6]
2026-02-04 00:02:21.944471 | orchestrator | openstack_networking_router_v2.router: Creating...
2026-02-04 00:02:24.184138 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 3s [id=3f45c181-f890-4932-9a09-2e0bc4fa8f14]
2026-02-04 00:02:24.318686 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 3s [id=672dc836-5b98-47e8-81c3-e5596cac2995]
2026-02-04 00:02:24.321601 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 3s [id=739b3430-b44e-4a37-a610-d4b8eb445a30]
2026-02-04 00:02:24.345188 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 3s [id=b9850f37-5fe6-4942-bfdc-bc374f48b750]
2026-02-04 00:02:24.462354 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 3s [id=80bce2bb-e18d-4255-9d30-172ea54b11f6]
2026-02-04 00:02:24.483991 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 3s [id=a371d88b-21aa-46d0-9a00-c59fe370106e]
2026-02-04 00:02:27.082056 | orchestrator | openstack_networking_router_v2.router: Creation complete after 5s [id=7a8b1dde-120f-4650-a7c2-1ed8945b7e14]
2026-02-04 00:02:27.275284 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creating...
2026-02-04 00:02:27.275356 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creating...
2026-02-04 00:02:27.275371 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creating...
2026-02-04 00:02:27.306895 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=85b4d8b6-c98b-4fc8-8fc2-0dd04f0ccb27]
2026-02-04 00:02:27.328707 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=db46b804-dac7-4a6c-ba7a-3f094b036b11]
2026-02-04 00:02:27.328970 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2026-02-04 00:02:27.329116 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2026-02-04 00:02:27.333059 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2026-02-04 00:02:27.334115 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creating...
2026-02-04 00:02:27.334605 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2026-02-04 00:02:27.334881 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creating...
2026-02-04 00:02:27.338090 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creating...
2026-02-04 00:02:27.338216 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creating...
2026-02-04 00:02:27.343827 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creating...
2026-02-04 00:02:27.637642 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=00966e6a-77d5-4247-bd51-fa0772b1c0a5]
2026-02-04 00:02:27.648801 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creating...
2026-02-04 00:02:27.981835 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 1s [id=a2e1428b-42b6-4e1f-bd1b-6ed275b63b0e]
2026-02-04 00:02:27.989180 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2026-02-04 00:02:28.301711 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creation complete after 1s [id=39a8652a-8cca-4650-8895-61ca52f871b7]
2026-02-04 00:02:28.309552 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2026-02-04 00:02:28.408007 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 0s [id=82aa0a76-c1fb-460e-a92c-137279d0c690]
2026-02-04 00:02:28.414767 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2026-02-04 00:02:28.524704 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creation complete after 2s [id=c3a46693-3036-4a99-9e1d-352a8c0cb50d]
2026-02-04 00:02:28.535982 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2026-02-04 00:02:28.602106 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 1s [id=d2e7ded0-21c8-4ccd-94c3-ad5347142edb]
2026-02-04 00:02:28.608943 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creation complete after 2s [id=eff79e66-f199-430e-b760-e143bf4229be]
2026-02-04 00:02:28.611064 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2026-02-04 00:02:28.616721 | orchestrator | openstack_networking_port_v2.manager_port_management: Creating...
2026-02-04 00:02:28.712891 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 2s [id=237a3ec1-9cfc-4a75-9f69-9923d8dc810e]
2026-02-04 00:02:28.763696 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 1s [id=e385ed08-8307-47f2-9bf9-9959c67c7b32]
2026-02-04 00:02:28.871853 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creation complete after 2s [id=75e21f95-e620-443c-b9a1-f31a71dc27ad]
2026-02-04 00:02:28.893219 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creation complete after 2s [id=41e66fbc-2968-4723-8c20-4cc7d15d1883]
2026-02-04 00:02:28.980735 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 0s [id=43701acf-95ee-47ae-88c4-c93a49db703c]
2026-02-04 00:02:29.289835 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 0s [id=cf55a615-6ba3-4b83-839e-0e0622b1c01a]
2026-02-04 00:02:29.359643 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=60afe6a2-af1d-41c5-a7bf-703a6a19cd40]
2026-02-04 00:02:29.572650 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 3s [id=419f556e-c003-42e8-865a-c1a5e348a999]
2026-02-04 00:02:29.662626 | orchestrator | openstack_networking_port_v2.manager_port_management: Creation complete after 1s [id=9ab7d801-f36f-454c-abf8-99a1ea0d68a4]
2026-02-04 00:02:31.581171 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creation complete after 5s [id=5a863670-f6e3-485a-b054-52956c4beeee]
2026-02-04 00:02:31.604022 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2026-02-04 00:02:31.612863 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creating...
2026-02-04 00:02:31.614741 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creating...
2026-02-04 00:02:31.615803 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creating...
2026-02-04 00:02:31.627459 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creating...
2026-02-04 00:02:31.629434 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creating...
2026-02-04 00:02:31.629884 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creating...
2026-02-04 00:02:34.073415 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 2s [id=010311f7-a88b-4da5-8b36-b2f86e3e4d8a]
2026-02-04 00:02:34.086099 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2026-02-04 00:02:34.091827 | orchestrator | local_file.MANAGER_ADDRESS: Creating...
2026-02-04 00:02:34.094933 | orchestrator | local_file.inventory: Creating...
2026-02-04 00:02:34.102086 | orchestrator | local_file.inventory: Creation complete after 0s [id=24baeead01a373c732fa5253f372765c2898d851]
2026-02-04 00:02:34.102133 | orchestrator | local_file.MANAGER_ADDRESS: Creation complete after 0s [id=674a0af537b283f065b1b29707164ffebbc6a084]
2026-02-04 00:02:34.884547 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=010311f7-a88b-4da5-8b36-b2f86e3e4d8a]
2026-02-04 00:02:41.615162 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2026-02-04 00:02:41.618527 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2026-02-04 00:02:41.619627 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2026-02-04 00:02:41.628903 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2026-02-04 00:02:41.631312 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2026-02-04 00:02:41.631375 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2026-02-04 00:02:51.616044 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2026-02-04 00:02:51.619555 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2026-02-04 00:02:51.620684 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2026-02-04 00:02:51.630176 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2026-02-04 00:02:51.632372 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2026-02-04 00:02:51.632566 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2026-02-04 00:03:01.618429 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed]
2026-02-04 00:03:01.619591 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed]
2026-02-04 00:03:01.620860 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed]
2026-02-04 00:03:01.631330 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed]
2026-02-04 00:03:01.632592 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed]
2026-02-04 00:03:01.632649 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed]
2026-02-04 00:03:11.621224 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [40s elapsed]
2026-02-04 00:03:11.621360 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [40s elapsed]
2026-02-04 00:03:11.621378 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [40s elapsed]
2026-02-04 00:03:11.631525 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [40s elapsed]
2026-02-04 00:03:11.632708 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [40s elapsed]
2026-02-04 00:03:11.632800 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [40s elapsed]
2026-02-04 00:03:12.284022 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creation complete after 40s [id=681da7ca-5f4e-4c9b-afc0-8dbcf7b6dc8a]
2026-02-04 00:03:12.388529 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creation complete after 40s [id=db9773b0-a73f-4a2a-b08c-10a710c5ed53]
2026-02-04 00:03:12.567121 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creation complete after 41s [id=be9ea79e-a77b-4004-8bab-0f552d5da8b9]
2026-02-04 00:03:21.623023 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [50s elapsed]
2026-02-04 00:03:21.633494 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [50s elapsed]
2026-02-04 00:03:21.633595 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [50s elapsed]
2026-02-04 00:03:22.585819 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creation complete after 51s [id=8e774f85-122c-44f2-ad43-4f0856f673d4]
2026-02-04 00:03:22.798242 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creation complete after 51s [id=44e8ace2-ca44-4b2e-9cb8-1c5344d03084]
2026-02-04 00:03:31.642109 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [1m0s elapsed]
2026-02-04 00:03:41.651332 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [1m10s elapsed]
2026-02-04 00:03:42.431375 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creation complete after 1m10s [id=8e32975b-6686-472d-8e91-6bb725889307]
2026-02-04 00:03:42.452658 | orchestrator | null_resource.node_semaphore: Creating...
2026-02-04 00:03:42.453800 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2026-02-04 00:03:42.455210 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2026-02-04 00:03:42.456141 | orchestrator | null_resource.node_semaphore: Creation complete after 0s [id=8179974974530052377]
2026-02-04 00:03:42.473592 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2026-02-04 00:03:42.474244 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2026-02-04 00:03:42.476149 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2026-02-04 00:03:42.477318 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2026-02-04 00:03:42.479312 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2026-02-04 00:03:42.486465 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2026-02-04 00:03:42.487115 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2026-02-04 00:03:42.493516 | orchestrator | openstack_compute_instance_v2.manager_server: Creating...
2026-02-04 00:03:45.845699 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 4s [id=8e774f85-122c-44f2-ad43-4f0856f673d4/fd9a253f-e742-4747-9193-aa6fcde93089]
2026-02-04 00:03:45.856621 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 4s [id=8e32975b-6686-472d-8e91-6bb725889307/ba01a385-e2e8-43d7-8237-fc6e15a9de89]
2026-02-04 00:03:45.883627 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 4s [id=44e8ace2-ca44-4b2e-9cb8-1c5344d03084/092c1f4e-b194-45a0-a7eb-d90ae37efda4]
2026-02-04 00:03:45.888710 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 4s [id=8e774f85-122c-44f2-ad43-4f0856f673d4/54fed4a3-dd06-43ea-9731-a81abbed62bd]
2026-02-04 00:03:45.945405 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 4s [id=44e8ace2-ca44-4b2e-9cb8-1c5344d03084/d0af9621-3ff0-4b28-b816-705c9ef71a8d]
2026-02-04 00:03:45.988391 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 4s [id=8e32975b-6686-472d-8e91-6bb725889307/194ead4c-37cf-4237-b5c2-bf752e6bc508]
2026-02-04 00:03:52.027671 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 10s [id=8e774f85-122c-44f2-ad43-4f0856f673d4/03b06afa-7fea-4d7e-bf2e-7215727f5f52]
2026-02-04 00:03:52.049612 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 10s [id=44e8ace2-ca44-4b2e-9cb8-1c5344d03084/27db8536-d7cf-467f-b2b6-f0129584608d]
2026-02-04 00:03:52.083424 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 10s [id=8e32975b-6686-472d-8e91-6bb725889307/7fd0bd10-abd8-4e0c-8290-4705cc531d08]
2026-02-04 00:03:52.498082 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed]
2026-02-04 00:04:02.507047 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [21s elapsed]
2026-02-04 00:04:03.880443 | orchestrator | openstack_compute_instance_v2.manager_server: Creation complete after 22s [id=6813de93-999d-4c8a-bf26-f188e86f3c80]
2026-02-04 00:04:05.256643 | orchestrator |
2026-02-04 00:04:05.256717 | orchestrator | Apply complete! Resources: 64 added, 0 changed, 0 destroyed.
2026-02-04 00:04:05.256726 | orchestrator |
2026-02-04 00:04:05.256733 | orchestrator | Outputs:
2026-02-04 00:04:05.256740 | orchestrator |
2026-02-04 00:04:05.256746 | orchestrator | manager_address =
2026-02-04 00:04:05.256753 | orchestrator | private_key =
2026-02-04 00:04:05.553818 | orchestrator | ok: Runtime: 0:01:54.683319
2026-02-04 00:04:05.591860 |
2026-02-04 00:04:05.592047 | TASK [Create infrastructure (stable)]
2026-02-04 00:04:06.122586 | orchestrator | skipping: Conditional result was False
2026-02-04 00:04:06.139268 |
2026-02-04 00:04:06.139417 | TASK [Fetch manager address]
2026-02-04 00:04:06.599347 | orchestrator | ok
2026-02-04 00:04:06.609210 |
2026-02-04 00:04:06.609324 | TASK [Set manager_host address]
2026-02-04 00:04:06.662658 | orchestrator | ok
2026-02-04 00:04:06.671672 |
2026-02-04 00:04:06.671786 | LOOP [Update ansible collections]
2026-02-04 00:04:08.232891 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-02-04 00:04:08.233207 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-02-04 00:04:08.233262 | orchestrator | Starting galaxy collection install process
2026-02-04 00:04:08.233301 | orchestrator | Process install dependency map
2026-02-04 00:04:08.233337 | orchestrator | Starting collection install process
2026-02-04 00:04:08.233370 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed05/.ansible/collections/ansible_collections/osism/commons'
2026-02-04 00:04:08.233409 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed05/.ansible/collections/ansible_collections/osism/commons
2026-02-04 00:04:08.233454 | orchestrator | osism.commons:999.0.0 was installed successfully
2026-02-04 00:04:08.233524 | orchestrator | ok: Item: commons Runtime: 0:00:01.261312
2026-02-04 00:04:09.201112 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-02-04 00:04:09.201289 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-02-04 00:04:09.201450 | orchestrator | Starting galaxy collection install process
2026-02-04 00:04:09.201499 | orchestrator | Process install dependency map
2026-02-04 00:04:09.201537 | orchestrator | Starting collection install process
2026-02-04 00:04:09.201571 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed05/.ansible/collections/ansible_collections/osism/services'
2026-02-04 00:04:09.201605 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed05/.ansible/collections/ansible_collections/osism/services
2026-02-04 00:04:09.201637 | orchestrator | osism.services:999.0.0 was installed successfully
2026-02-04 00:04:09.201689 | orchestrator | ok: Item: services Runtime: 0:00:00.656011
2026-02-04 00:04:09.218257 |
2026-02-04 00:04:09.218383 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2026-02-04 00:04:19.734515 | orchestrator | ok
2026-02-04 00:04:19.745198 |
2026-02-04 00:04:19.745324 | TASK [Wait a little longer for the manager so that everything is ready]
2026-02-04 00:05:19.792607 | orchestrator | ok
2026-02-04 00:05:19.803302 |
2026-02-04 00:05:19.803429 | TASK [Fetch manager ssh hostkey]
2026-02-04 00:05:21.387442 | orchestrator | Output suppressed because no_log was given
2026-02-04 00:05:21.403376 |
2026-02-04 00:05:21.403546 | TASK [Get ssh keypair from terraform environment]
2026-02-04 00:05:21.940789 | orchestrator | ok: Runtime: 0:00:00.007450
2026-02-04 00:05:21.950466 |
2026-02-04 00:05:21.950602 | TASK [Point out that the following task takes some time and does not give any output]
2026-02-04 00:05:21.988344 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
2026-02-04 00:05:21.999233 |
2026-02-04 00:05:21.999397 | TASK [Run manager part 0]
2026-02-04 00:05:23.117188 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-02-04 00:05:23.172449 | orchestrator |
2026-02-04 00:05:23.172505 | orchestrator | PLAY [Wait for cloud-init to finish] *******************************************
2026-02-04 00:05:23.172513 | orchestrator |
2026-02-04 00:05:23.172528 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] *****************************
2026-02-04 00:05:25.106094 | orchestrator | ok: [testbed-manager]
2026-02-04 00:05:25.106165 | orchestrator |
2026-02-04 00:05:25.106193 | orchestrator | PLAY [Run manager part 0] ******************************************************
2026-02-04 00:05:25.106204 | orchestrator |
2026-02-04 00:05:25.106213 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-02-04 00:05:27.179186 | orchestrator | ok: [testbed-manager]
2026-02-04 00:05:27.179234 | orchestrator |
2026-02-04 00:05:27.179242 | orchestrator | TASK [Get home directory of ansible user] **************************************
2026-02-04 00:05:27.884898 | orchestrator | ok: [testbed-manager]
2026-02-04 00:05:27.885072 | orchestrator |
2026-02-04 00:05:27.885093 | orchestrator | TASK [Set repo_path fact] ******************************************************
2026-02-04 00:05:27.931712 | orchestrator | skipping: [testbed-manager]
2026-02-04 00:05:27.931761 | orchestrator |
2026-02-04 00:05:27.931774 | orchestrator | TASK [Update package cache] ****************************************************
2026-02-04 00:05:27.960488 | orchestrator | skipping: [testbed-manager]
2026-02-04 00:05:27.960540 | orchestrator |
2026-02-04 00:05:27.960548 | orchestrator | TASK [Install required packages] ***********************************************
2026-02-04 00:05:27.991599 | orchestrator | skipping: [testbed-manager]
2026-02-04 00:05:27.991654 | orchestrator |
2026-02-04 00:05:27.991663 | orchestrator | TASK [Remove some python packages] *********************************************
2026-02-04 00:05:28.023648 | orchestrator | skipping: [testbed-manager]
2026-02-04 00:05:28.023692 | orchestrator |
2026-02-04 00:05:28.023698 | orchestrator | TASK [Set venv_command fact (RedHat)] ******************************************
2026-02-04 00:05:28.057617 | orchestrator | skipping: [testbed-manager]
2026-02-04 00:05:28.057674 | orchestrator |
2026-02-04 00:05:28.057685 | orchestrator | TASK [Fail if Ubuntu version is lower than 24.04] ******************************
2026-02-04 00:05:28.095445 | orchestrator | skipping: [testbed-manager]
2026-02-04 00:05:28.095494 | orchestrator |
2026-02-04 00:05:28.095503 | orchestrator | TASK [Fail if Debian version is lower than 12] *********************************
2026-02-04 00:05:28.127335 | orchestrator | skipping: [testbed-manager]
2026-02-04 00:05:28.127391 | orchestrator |
2026-02-04 00:05:28.127401 | orchestrator | TASK [Set APT options on manager] **********************************************
2026-02-04 00:05:28.865259 | orchestrator | changed: [testbed-manager]
2026-02-04 00:05:28.865306 | orchestrator |
2026-02-04 00:05:28.865312 | orchestrator | TASK [Update APT cache and run dist-upgrade] ***********************************
2026-02-04 00:08:31.730257 | orchestrator | changed: [testbed-manager]
2026-02-04 00:08:31.730328 | orchestrator |
2026-02-04 00:08:31.730345 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2026-02-04 00:09:54.728928 | orchestrator | changed: [testbed-manager]
2026-02-04 00:09:54.728991 | orchestrator |
2026-02-04 00:09:54.729001 | orchestrator | TASK [Install required packages] ***********************************************
2026-02-04 00:10:23.124452 | orchestrator | changed: [testbed-manager]
2026-02-04 00:10:23.124557 | orchestrator |
2026-02-04 00:10:23.124581 | orchestrator | TASK [Remove some python packages] *********************************************
2026-02-04 00:10:34.317161 | orchestrator | changed: [testbed-manager]
2026-02-04 00:10:34.317247 | orchestrator |
2026-02-04 00:10:34.317263 | orchestrator | TASK [Set venv_command fact (Debian)] ******************************************
2026-02-04 00:10:34.369704 | orchestrator | ok: [testbed-manager]
2026-02-04 00:10:34.369792 | orchestrator |
2026-02-04 00:10:34.369809 | orchestrator | TASK [Get current user] ********************************************************
2026-02-04 00:10:35.190384 | orchestrator | ok: [testbed-manager]
2026-02-04 00:10:35.190456 | orchestrator |
2026-02-04 00:10:35.190469 | orchestrator | TASK [Create venv directory] ***************************************************
2026-02-04 00:10:36.000871 | orchestrator | changed: [testbed-manager]
2026-02-04 00:10:36.000951 | orchestrator |
2026-02-04 00:10:36.000967 | orchestrator | TASK [Install netaddr in venv] *************************************************
2026-02-04 00:10:42.713140 | orchestrator | changed: [testbed-manager]
2026-02-04 00:10:42.713239 | orchestrator |
2026-02-04 00:10:42.713285 | orchestrator | TASK [Install ansible-core in venv] ********************************************
2026-02-04 00:10:48.984487 | orchestrator | changed: [testbed-manager]
2026-02-04 00:10:48.984577 | orchestrator |
2026-02-04 00:10:48.984596 | orchestrator | TASK [Install requests >= 2.32.2]
********************************************** 2026-02-04 00:10:53.236311 | orchestrator | changed: [testbed-manager] 2026-02-04 00:10:53.236816 | orchestrator | 2026-02-04 00:10:53.236836 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2026-02-04 00:10:55.123196 | orchestrator | changed: [testbed-manager] 2026-02-04 00:10:55.123244 | orchestrator | 2026-02-04 00:10:55.123250 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2026-02-04 00:10:56.338618 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-02-04 00:10:56.338692 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-02-04 00:10:56.338702 | orchestrator | 2026-02-04 00:10:56.338711 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2026-02-04 00:10:56.383041 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-02-04 00:10:56.383105 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-02-04 00:10:56.383115 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-02-04 00:10:56.383123 | orchestrator | deprecation_warnings=False in ansible.cfg. 
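The DEPRECATION WARNING record above names the knob itself: `deprecation_warnings=False` in ansible.cfg. A minimal sketch of applying it, assuming a per-directory `./ansible.cfg` (which Ansible honors when present in the working directory) rather than the system-wide `/etc/ansible/ansible.cfg`:

```shell
# Silence Ansible deprecation warnings like the one in the log above.
# Assumption: ./ansible.cfg in the working directory is the config Ansible
# picks up here; adjust the path for a system-wide or per-user config.
cat > ./ansible.cfg <<'EOF'
[defaults]
deprecation_warnings = False
EOF
```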
2026-02-04 00:11:04.486621 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-02-04 00:11:04.486683 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-02-04 00:11:04.486691 | orchestrator | 2026-02-04 00:11:04.486699 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2026-02-04 00:11:05.079293 | orchestrator | changed: [testbed-manager] 2026-02-04 00:11:05.079401 | orchestrator | 2026-02-04 00:11:05.079429 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2026-02-04 00:13:28.057501 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2026-02-04 00:13:28.057548 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2026-02-04 00:13:28.057557 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2026-02-04 00:13:28.057564 | orchestrator | 2026-02-04 00:13:28.057570 | orchestrator | TASK [Install local collections] *********************************************** 2026-02-04 00:13:30.447365 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2026-02-04 00:13:30.447401 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2026-02-04 00:13:30.447406 | orchestrator | 2026-02-04 00:13:30.447411 | orchestrator | PLAY [Create operator user] **************************************************** 2026-02-04 00:13:30.447416 | orchestrator | 2026-02-04 00:13:30.447421 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-04 00:13:31.848374 | orchestrator | ok: [testbed-manager] 2026-02-04 00:13:31.848416 | orchestrator | 2026-02-04 00:13:31.848426 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-02-04 00:13:31.894318 | orchestrator | ok: [testbed-manager] 2026-02-04 00:13:31.894443 | 
orchestrator | 2026-02-04 00:13:31.894469 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-02-04 00:13:31.986731 | orchestrator | ok: [testbed-manager] 2026-02-04 00:13:31.986808 | orchestrator | 2026-02-04 00:13:31.986823 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-02-04 00:13:32.848756 | orchestrator | changed: [testbed-manager] 2026-02-04 00:13:32.848843 | orchestrator | 2026-02-04 00:13:32.848858 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-02-04 00:13:33.595163 | orchestrator | changed: [testbed-manager] 2026-02-04 00:13:33.595271 | orchestrator | 2026-02-04 00:13:33.595301 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-02-04 00:13:35.008304 | orchestrator | changed: [testbed-manager] => (item=adm) 2026-02-04 00:13:35.008392 | orchestrator | changed: [testbed-manager] => (item=sudo) 2026-02-04 00:13:35.008407 | orchestrator | 2026-02-04 00:13:35.008439 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-02-04 00:13:36.429055 | orchestrator | changed: [testbed-manager] 2026-02-04 00:13:36.429108 | orchestrator | 2026-02-04 00:13:36.429116 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-02-04 00:13:38.214091 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2026-02-04 00:13:38.214200 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2026-02-04 00:13:38.214221 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2026-02-04 00:13:38.214233 | orchestrator | 2026-02-04 00:13:38.214246 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-02-04 00:13:38.264254 | orchestrator | skipping: 
[testbed-manager] 2026-02-04 00:13:38.264346 | orchestrator | 2026-02-04 00:13:38.264364 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-02-04 00:13:38.362291 | orchestrator | skipping: [testbed-manager] 2026-02-04 00:13:38.362382 | orchestrator | 2026-02-04 00:13:38.362400 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-02-04 00:13:38.955234 | orchestrator | changed: [testbed-manager] 2026-02-04 00:13:38.955335 | orchestrator | 2026-02-04 00:13:38.955353 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-02-04 00:13:39.035874 | orchestrator | skipping: [testbed-manager] 2026-02-04 00:13:39.035913 | orchestrator | 2026-02-04 00:13:39.035920 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-02-04 00:13:39.894771 | orchestrator | changed: [testbed-manager] => (item=None) 2026-02-04 00:13:39.894878 | orchestrator | changed: [testbed-manager] 2026-02-04 00:13:39.894906 | orchestrator | 2026-02-04 00:13:39.894923 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-02-04 00:13:39.927094 | orchestrator | skipping: [testbed-manager] 2026-02-04 00:13:39.927154 | orchestrator | 2026-02-04 00:13:39.927163 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-02-04 00:13:39.965828 | orchestrator | skipping: [testbed-manager] 2026-02-04 00:13:39.965977 | orchestrator | 2026-02-04 00:13:39.966006 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-02-04 00:13:40.006898 | orchestrator | skipping: [testbed-manager] 2026-02-04 00:13:40.007034 | orchestrator | 2026-02-04 00:13:40.007065 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-02-04 00:13:40.086180 | 
orchestrator | skipping: [testbed-manager] 2026-02-04 00:13:40.086224 | orchestrator | 2026-02-04 00:13:40.086233 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-02-04 00:13:40.830664 | orchestrator | ok: [testbed-manager] 2026-02-04 00:13:40.831349 | orchestrator | 2026-02-04 00:13:40.831364 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-02-04 00:13:40.831370 | orchestrator | 2026-02-04 00:13:40.831374 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-04 00:13:42.223611 | orchestrator | ok: [testbed-manager] 2026-02-04 00:13:42.223710 | orchestrator | 2026-02-04 00:13:42.223733 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2026-02-04 00:13:43.221346 | orchestrator | changed: [testbed-manager] 2026-02-04 00:13:43.221399 | orchestrator | 2026-02-04 00:13:43.221406 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 00:13:43.221411 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=14 rescued=0 ignored=0 2026-02-04 00:13:43.221416 | orchestrator | 2026-02-04 00:13:43.402527 | orchestrator | ok: Runtime: 0:08:21.001444 2026-02-04 00:13:43.413836 | 2026-02-04 00:13:43.413952 | TASK [Point out that the log in on the manager is now possible] 2026-02-04 00:13:43.456826 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 2026-02-04 00:13:43.465370 | 2026-02-04 00:13:43.465488 | TASK [Point out that the following task takes some time and does not give any output] 2026-02-04 00:13:43.513549 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
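The `Set language variables in .bashrc configuration file` task above behaves like Ansible's lineinfile module: each export line is appended only if it is not already present, which is why re-runs report `ok` instead of `changed`. A rough shell equivalent of that idempotent append (assumption: `./bashrc.example` stands in for the operator user's real `~/.bashrc`):

```shell
# Idempotently append the locale exports seen in the log above.
# Assumption: ./bashrc.example is a local stand-in for ~/.bashrc.
bashrc=./bashrc.example
touch "$bashrc"
for line in 'export LANGUAGE=C.UTF-8' 'export LANG=C.UTF-8' 'export LC_ALL=C.UTF-8'; do
    # -x: match the whole line, -F: treat the pattern as a fixed string
    grep -qxF "$line" "$bashrc" || echo "$line" >> "$bashrc"
done
```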
2026-02-04 00:13:43.523874 | 2026-02-04 00:13:43.524026 | TASK [Run manager part 1 + 2] 2026-02-04 00:13:45.088876 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-02-04 00:13:45.148035 | orchestrator | 2026-02-04 00:13:45.148083 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2026-02-04 00:13:45.148090 | orchestrator | 2026-02-04 00:13:45.148102 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-04 00:13:48.149529 | orchestrator | ok: [testbed-manager] 2026-02-04 00:13:48.149593 | orchestrator | 2026-02-04 00:13:48.149621 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-02-04 00:13:48.186072 | orchestrator | skipping: [testbed-manager] 2026-02-04 00:13:48.186131 | orchestrator | 2026-02-04 00:13:48.186140 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-02-04 00:13:48.230983 | orchestrator | ok: [testbed-manager] 2026-02-04 00:13:48.231090 | orchestrator | 2026-02-04 00:13:48.231119 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-02-04 00:13:48.278603 | orchestrator | ok: [testbed-manager] 2026-02-04 00:13:48.278685 | orchestrator | 2026-02-04 00:13:48.278699 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-02-04 00:13:48.351206 | orchestrator | ok: [testbed-manager] 2026-02-04 00:13:48.351290 | orchestrator | 2026-02-04 00:13:48.351307 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-02-04 00:13:48.419561 | orchestrator | ok: [testbed-manager] 2026-02-04 00:13:48.419620 | orchestrator | 2026-02-04 00:13:48.419628 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-02-04 00:13:48.478203 | 
orchestrator | included: /home/zuul-testbed05/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2026-02-04 00:13:48.478261 | orchestrator | 2026-02-04 00:13:48.478270 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-02-04 00:13:49.250968 | orchestrator | ok: [testbed-manager] 2026-02-04 00:13:49.251034 | orchestrator | 2026-02-04 00:13:49.251045 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-02-04 00:13:49.305492 | orchestrator | skipping: [testbed-manager] 2026-02-04 00:13:49.305548 | orchestrator | 2026-02-04 00:13:49.305556 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-02-04 00:13:50.804653 | orchestrator | changed: [testbed-manager] 2026-02-04 00:13:50.804743 | orchestrator | 2026-02-04 00:13:50.804762 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-02-04 00:13:51.428048 | orchestrator | ok: [testbed-manager] 2026-02-04 00:13:51.428105 | orchestrator | 2026-02-04 00:13:51.428114 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-02-04 00:13:52.569322 | orchestrator | changed: [testbed-manager] 2026-02-04 00:13:52.569390 | orchestrator | 2026-02-04 00:13:52.569408 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-02-04 00:14:08.337913 | orchestrator | changed: [testbed-manager] 2026-02-04 00:14:08.338152 | orchestrator | 2026-02-04 00:14:08.338167 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-02-04 00:14:09.047972 | orchestrator | ok: [testbed-manager] 2026-02-04 00:14:09.048052 | orchestrator | 2026-02-04 00:14:09.048067 | orchestrator | TASK [Set repo_path fact] ****************************************************** 
2026-02-04 00:14:09.098510 | orchestrator | skipping: [testbed-manager] 2026-02-04 00:14:09.098593 | orchestrator | 2026-02-04 00:14:09.098606 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2026-02-04 00:14:10.123676 | orchestrator | changed: [testbed-manager] 2026-02-04 00:14:10.123742 | orchestrator | 2026-02-04 00:14:10.123752 | orchestrator | TASK [Copy SSH private key] **************************************************** 2026-02-04 00:14:11.129560 | orchestrator | changed: [testbed-manager] 2026-02-04 00:14:11.129688 | orchestrator | 2026-02-04 00:14:11.129711 | orchestrator | TASK [Create configuration directory] ****************************************** 2026-02-04 00:14:11.725241 | orchestrator | changed: [testbed-manager] 2026-02-04 00:14:11.725287 | orchestrator | 2026-02-04 00:14:11.725296 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2026-02-04 00:14:11.767477 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-02-04 00:14:11.767594 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-02-04 00:14:11.767611 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-02-04 00:14:11.767623 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2026-02-04 00:14:14.799462 | orchestrator | changed: [testbed-manager] 2026-02-04 00:14:14.799579 | orchestrator | 2026-02-04 00:14:14.799606 | orchestrator | TASK [Install python requirements in venv] ************************************* 2026-02-04 00:14:24.291534 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2026-02-04 00:14:24.291585 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2026-02-04 00:14:24.291596 | orchestrator | ok: [testbed-manager] => (item=packaging) 2026-02-04 00:14:24.291604 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2026-02-04 00:14:24.291614 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2026-02-04 00:14:24.291621 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2026-02-04 00:14:24.291629 | orchestrator | 2026-02-04 00:14:24.291637 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2026-02-04 00:14:25.399994 | orchestrator | changed: [testbed-manager] 2026-02-04 00:14:25.400082 | orchestrator | 2026-02-04 00:14:25.400101 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2026-02-04 00:14:25.442508 | orchestrator | skipping: [testbed-manager] 2026-02-04 00:14:25.442575 | orchestrator | 2026-02-04 00:14:25.442584 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2026-02-04 00:14:28.724707 | orchestrator | changed: [testbed-manager] 2026-02-04 00:14:28.724803 | orchestrator | 2026-02-04 00:14:28.724821 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2026-02-04 00:14:28.763463 | orchestrator | skipping: [testbed-manager] 2026-02-04 00:14:28.763503 | orchestrator | 2026-02-04 00:14:28.763511 | orchestrator | TASK [Run manager part 2] ****************************************************** 2026-02-04 00:16:10.255103 | orchestrator | changed: [testbed-manager] 2026-02-04 
00:16:10.255143 | orchestrator | 2026-02-04 00:16:10.255150 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-02-04 00:16:11.536863 | orchestrator | ok: [testbed-manager] 2026-02-04 00:16:11.536911 | orchestrator | 2026-02-04 00:16:11.536921 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 00:16:11.536930 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2026-02-04 00:16:11.536939 | orchestrator | 2026-02-04 00:16:12.158180 | orchestrator | ok: Runtime: 0:02:27.814942 2026-02-04 00:16:12.174020 | 2026-02-04 00:16:12.174214 | TASK [Reboot manager] 2026-02-04 00:16:13.712074 | orchestrator | ok: Runtime: 0:00:00.982770 2026-02-04 00:16:13.728902 | 2026-02-04 00:16:13.729059 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-02-04 00:16:30.149541 | orchestrator | ok 2026-02-04 00:16:30.162165 | 2026-02-04 00:16:30.162295 | TASK [Wait a little longer for the manager so that everything is ready] 2026-02-04 00:17:30.212115 | orchestrator | ok 2026-02-04 00:17:30.222765 | 2026-02-04 00:17:30.222936 | TASK [Deploy manager + bootstrap nodes] 2026-02-04 00:17:32.919198 | orchestrator | 2026-02-04 00:17:32.919451 | orchestrator | # DEPLOY MANAGER 2026-02-04 00:17:32.919475 | orchestrator | 2026-02-04 00:17:32.919488 | orchestrator | + set -e 2026-02-04 00:17:32.919499 | orchestrator | + echo 2026-02-04 00:17:32.919511 | orchestrator | + echo '# DEPLOY MANAGER' 2026-02-04 00:17:32.919526 | orchestrator | + echo 2026-02-04 00:17:32.919568 | orchestrator | + cat /opt/manager-vars.sh 2026-02-04 00:17:32.921854 | orchestrator | export NUMBER_OF_NODES=6 2026-02-04 00:17:32.921935 | orchestrator | 2026-02-04 00:17:32.921945 | orchestrator | export CEPH_VERSION=reef 2026-02-04 00:17:32.921954 | orchestrator | export CONFIGURATION_VERSION=main 2026-02-04 00:17:32.921963 | orchestrator 
| export MANAGER_VERSION=latest 2026-02-04 00:17:32.921989 | orchestrator | export OPENSTACK_VERSION=2024.2 2026-02-04 00:17:32.921998 | orchestrator | 2026-02-04 00:17:32.922048 | orchestrator | export ARA=false 2026-02-04 00:17:32.922058 | orchestrator | export DEPLOY_MODE=manager 2026-02-04 00:17:32.922068 | orchestrator | export TEMPEST=true 2026-02-04 00:17:32.922075 | orchestrator | export IS_ZUUL=true 2026-02-04 00:17:32.922081 | orchestrator | 2026-02-04 00:17:32.922092 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.33 2026-02-04 00:17:32.922098 | orchestrator | export EXTERNAL_API=false 2026-02-04 00:17:32.922104 | orchestrator | 2026-02-04 00:17:32.922110 | orchestrator | export IMAGE_USER=ubuntu 2026-02-04 00:17:32.922119 | orchestrator | export IMAGE_NODE_USER=ubuntu 2026-02-04 00:17:32.922125 | orchestrator | 2026-02-04 00:17:32.922131 | orchestrator | export CEPH_STACK=ceph-ansible 2026-02-04 00:17:32.922146 | orchestrator | 2026-02-04 00:17:32.922152 | orchestrator | + echo 2026-02-04 00:17:32.922159 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-04 00:17:32.923048 | orchestrator | ++ export INTERACTIVE=false 2026-02-04 00:17:32.923061 | orchestrator | ++ INTERACTIVE=false 2026-02-04 00:17:32.923072 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-04 00:17:32.923080 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-04 00:17:32.923271 | orchestrator | + source /opt/manager-vars.sh 2026-02-04 00:17:32.923282 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-04 00:17:32.923298 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-04 00:17:32.923344 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-04 00:17:32.923351 | orchestrator | ++ CEPH_VERSION=reef 2026-02-04 00:17:32.923358 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-02-04 00:17:32.923373 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-02-04 00:17:32.923379 | orchestrator | ++ export MANAGER_VERSION=latest 2026-02-04 00:17:32.923386 | 
orchestrator | ++ MANAGER_VERSION=latest 2026-02-04 00:17:32.923392 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-02-04 00:17:32.923406 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-02-04 00:17:32.923413 | orchestrator | ++ export ARA=false 2026-02-04 00:17:32.923419 | orchestrator | ++ ARA=false 2026-02-04 00:17:32.923425 | orchestrator | ++ export DEPLOY_MODE=manager 2026-02-04 00:17:32.923431 | orchestrator | ++ DEPLOY_MODE=manager 2026-02-04 00:17:32.923487 | orchestrator | ++ export TEMPEST=true 2026-02-04 00:17:32.923494 | orchestrator | ++ TEMPEST=true 2026-02-04 00:17:32.923500 | orchestrator | ++ export IS_ZUUL=true 2026-02-04 00:17:32.923507 | orchestrator | ++ IS_ZUUL=true 2026-02-04 00:17:32.923513 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.33 2026-02-04 00:17:32.923519 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.33 2026-02-04 00:17:32.923525 | orchestrator | ++ export EXTERNAL_API=false 2026-02-04 00:17:32.923532 | orchestrator | ++ EXTERNAL_API=false 2026-02-04 00:17:32.923652 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-02-04 00:17:32.923662 | orchestrator | ++ IMAGE_USER=ubuntu 2026-02-04 00:17:32.923669 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-02-04 00:17:32.923675 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-02-04 00:17:32.923681 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-02-04 00:17:32.923688 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-02-04 00:17:32.923694 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2026-02-04 00:17:32.975712 | orchestrator | + docker version 2026-02-04 00:17:33.257726 | orchestrator | Client: Docker Engine - Community 2026-02-04 00:17:33.257832 | orchestrator | Version: 27.5.1 2026-02-04 00:17:33.257849 | orchestrator | API version: 1.47 2026-02-04 00:17:33.257864 | orchestrator | Go version: go1.22.11 2026-02-04 00:17:33.257876 | orchestrator | Git commit: 9f9e405 2026-02-04 00:17:33.257887 | 
orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-02-04 00:17:33.257899 | orchestrator | OS/Arch: linux/amd64 2026-02-04 00:17:33.257910 | orchestrator | Context: default 2026-02-04 00:17:33.257921 | orchestrator | 2026-02-04 00:17:33.257932 | orchestrator | Server: Docker Engine - Community 2026-02-04 00:17:33.257944 | orchestrator | Engine: 2026-02-04 00:17:33.257955 | orchestrator | Version: 27.5.1 2026-02-04 00:17:33.257967 | orchestrator | API version: 1.47 (minimum version 1.24) 2026-02-04 00:17:33.258007 | orchestrator | Go version: go1.22.11 2026-02-04 00:17:33.258068 | orchestrator | Git commit: 4c9b3b0 2026-02-04 00:17:33.258080 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-02-04 00:17:33.258092 | orchestrator | OS/Arch: linux/amd64 2026-02-04 00:17:33.258103 | orchestrator | Experimental: false 2026-02-04 00:17:33.258114 | orchestrator | containerd: 2026-02-04 00:17:33.258125 | orchestrator | Version: v2.2.1 2026-02-04 00:17:33.258137 | orchestrator | GitCommit: dea7da592f5d1d2b7755e3a161be07f43fad8f75 2026-02-04 00:17:33.258149 | orchestrator | runc: 2026-02-04 00:17:33.258174 | orchestrator | Version: 1.3.4 2026-02-04 00:17:33.258186 | orchestrator | GitCommit: v1.3.4-0-gd6d73eb8 2026-02-04 00:17:33.258197 | orchestrator | docker-init: 2026-02-04 00:17:33.258208 | orchestrator | Version: 0.19.0 2026-02-04 00:17:33.258220 | orchestrator | GitCommit: de40ad0 2026-02-04 00:17:33.260009 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2026-02-04 00:17:33.267923 | orchestrator | + set -e 2026-02-04 00:17:33.267962 | orchestrator | + source /opt/manager-vars.sh 2026-02-04 00:17:33.267972 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-04 00:17:33.267982 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-04 00:17:33.267989 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-04 00:17:33.267996 | orchestrator | ++ CEPH_VERSION=reef 2026-02-04 00:17:33.268004 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-02-04 
00:17:33.268013 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-02-04 00:17:33.268020 | orchestrator | ++ export MANAGER_VERSION=latest 2026-02-04 00:17:33.268027 | orchestrator | ++ MANAGER_VERSION=latest 2026-02-04 00:17:33.268035 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-02-04 00:17:33.268042 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-02-04 00:17:33.268050 | orchestrator | ++ export ARA=false 2026-02-04 00:17:33.268058 | orchestrator | ++ ARA=false 2026-02-04 00:17:33.268065 | orchestrator | ++ export DEPLOY_MODE=manager 2026-02-04 00:17:33.268073 | orchestrator | ++ DEPLOY_MODE=manager 2026-02-04 00:17:33.268080 | orchestrator | ++ export TEMPEST=true 2026-02-04 00:17:33.268087 | orchestrator | ++ TEMPEST=true 2026-02-04 00:17:33.268094 | orchestrator | ++ export IS_ZUUL=true 2026-02-04 00:17:33.268101 | orchestrator | ++ IS_ZUUL=true 2026-02-04 00:17:33.268115 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.33 2026-02-04 00:17:33.268123 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.33 2026-02-04 00:17:33.268130 | orchestrator | ++ export EXTERNAL_API=false 2026-02-04 00:17:33.268137 | orchestrator | ++ EXTERNAL_API=false 2026-02-04 00:17:33.268145 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-02-04 00:17:33.268152 | orchestrator | ++ IMAGE_USER=ubuntu 2026-02-04 00:17:33.268162 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-02-04 00:17:33.268174 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-02-04 00:17:33.268185 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-02-04 00:17:33.268223 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-02-04 00:17:33.268235 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-04 00:17:33.268247 | orchestrator | ++ export INTERACTIVE=false 2026-02-04 00:17:33.268258 | orchestrator | ++ INTERACTIVE=false 2026-02-04 00:17:33.268269 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-04 00:17:33.268285 | orchestrator | ++ OSISM_APPLY_RETRY=1 
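The `set-ceph-version.sh` and `set-openstack-version.sh` calls traced further down in this log share one pattern: grep the configuration file for the key, and only if it exists, sed the line to the pinned value. A condensed, self-contained sketch of that pattern (assumption: a local `./configuration.yml` stands in for `/opt/configuration/environments/manager/configuration.yml`):

```shell
# Pin a "key: value" line in a YAML file, mirroring the grep+sed pattern
# of the set-*-version.sh scripts. Assumption: ./configuration.yml is a
# local stand-in for the real manager configuration file.
cfg=./configuration.yml
printf 'ceph_version: quincy\nopenstack_version: 2024.1\n' > "$cfg"

set_version() {
    local key=$1 version=$2
    # Only rewrite when the key is present, as the scripts' [[ -n ... ]] check does.
    if grep -q "^${key}:" "$cfg"; then
        sed -i "s/^${key}: .*/${key}: ${version}/" "$cfg"
    fi
}

set_version ceph_version reef
set_version openstack_version 2024.2
```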
2026-02-04 00:17:33.268301 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-02-04 00:17:33.268314 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-02-04 00:17:33.268326 | orchestrator | + /opt/configuration/scripts/set-ceph-version.sh reef 2026-02-04 00:17:33.276112 | orchestrator | + set -e 2026-02-04 00:17:33.276145 | orchestrator | + VERSION=reef 2026-02-04 00:17:33.277395 | orchestrator | ++ grep '^ceph_version:' /opt/configuration/environments/manager/configuration.yml 2026-02-04 00:17:33.334994 | orchestrator | + [[ -n ceph_version: reef ]] 2026-02-04 00:17:33.335116 | orchestrator | + sed -i 's/ceph_version: .*/ceph_version: reef/g' /opt/configuration/environments/manager/configuration.yml 2026-02-04 00:17:33.342362 | orchestrator | + /opt/configuration/scripts/set-openstack-version.sh 2024.2 2026-02-04 00:17:33.351070 | orchestrator | + set -e 2026-02-04 00:17:33.351167 | orchestrator | + VERSION=2024.2 2026-02-04 00:17:33.351195 | orchestrator | ++ grep '^openstack_version:' /opt/configuration/environments/manager/configuration.yml 2026-02-04 00:17:33.355232 | orchestrator | + [[ -n openstack_version: 2024.2 ]] 2026-02-04 00:17:33.355269 | orchestrator | + sed -i 's/openstack_version: .*/openstack_version: 2024.2/g' /opt/configuration/environments/manager/configuration.yml 2026-02-04 00:17:33.361506 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2026-02-04 00:17:33.362204 | orchestrator | ++ semver latest 7.0.0 2026-02-04 00:17:33.430597 | orchestrator | + [[ -1 -ge 0 ]] 2026-02-04 00:17:33.430891 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-02-04 00:17:33.430932 | orchestrator | + echo 'enable_osism_kubernetes: true' 2026-02-04 00:17:33.431334 | orchestrator | ++ semver latest 10.0.0-0 2026-02-04 00:17:33.484531 | orchestrator | + [[ -1 -ge 0 ]] 2026-02-04 00:17:33.484756 | orchestrator | ++ semver 2024.2 2025.1 2026-02-04 00:17:33.548266 | orchestrator | + [[ -1 -ge 0 ]] 2026-02-04 00:17:33.548371 | orchestrator | + 
/opt/configuration/scripts/enable-resource-nodes.sh 2026-02-04 00:17:33.651312 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-02-04 00:17:33.652977 | orchestrator | + source /opt/venv/bin/activate 2026-02-04 00:17:33.654408 | orchestrator | ++ deactivate nondestructive 2026-02-04 00:17:33.654519 | orchestrator | ++ '[' -n '' ']' 2026-02-04 00:17:33.654535 | orchestrator | ++ '[' -n '' ']' 2026-02-04 00:17:33.654546 | orchestrator | ++ hash -r 2026-02-04 00:17:33.654560 | orchestrator | ++ '[' -n '' ']' 2026-02-04 00:17:33.654620 | orchestrator | ++ unset VIRTUAL_ENV 2026-02-04 00:17:33.654661 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-02-04 00:17:33.654681 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']' 2026-02-04 00:17:33.654711 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-02-04 00:17:33.654729 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-02-04 00:17:33.654746 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-02-04 00:17:33.654762 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-02-04 00:17:33.654779 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-04 00:17:33.654819 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-04 00:17:33.654831 | orchestrator | ++ export PATH 2026-02-04 00:17:33.654867 | orchestrator | ++ '[' -n '' ']' 2026-02-04 00:17:33.654957 | orchestrator | ++ '[' -z '' ']' 2026-02-04 00:17:33.655023 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-02-04 00:17:33.655092 | orchestrator | ++ PS1='(venv) ' 2026-02-04 00:17:33.655117 | orchestrator | ++ export PS1 2026-02-04 00:17:33.655131 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2026-02-04 00:17:33.655142 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-02-04 00:17:33.655267 | orchestrator | ++ hash -r 2026-02-04 00:17:33.655566 | orchestrator | + ansible-playbook -i 
testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2026-02-04 00:17:35.080753 | orchestrator | 2026-02-04 00:17:35.080857 | orchestrator | PLAY [Copy custom facts] ******************************************************* 2026-02-04 00:17:35.080871 | orchestrator | 2026-02-04 00:17:35.080883 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-02-04 00:17:35.676686 | orchestrator | ok: [testbed-manager] 2026-02-04 00:17:35.676818 | orchestrator | 2026-02-04 00:17:35.676843 | orchestrator | TASK [Copy fact files] ********************************************************* 2026-02-04 00:17:36.738365 | orchestrator | changed: [testbed-manager] 2026-02-04 00:17:36.738473 | orchestrator | 2026-02-04 00:17:36.738490 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2026-02-04 00:17:36.738503 | orchestrator | 2026-02-04 00:17:36.738514 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-04 00:17:40.299624 | orchestrator | ok: [testbed-manager] 2026-02-04 00:17:40.299911 | orchestrator | 2026-02-04 00:17:40.299951 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2026-02-04 00:17:40.357404 | orchestrator | ok: [testbed-manager] 2026-02-04 00:17:40.357502 | orchestrator | 2026-02-04 00:17:40.357521 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2026-02-04 00:17:40.852111 | orchestrator | changed: [testbed-manager] 2026-02-04 00:17:40.852200 | orchestrator | 2026-02-04 00:17:40.852210 | orchestrator | TASK [Add netbox_enable parameter] ********************************************* 2026-02-04 00:17:40.886741 | orchestrator | skipping: [testbed-manager] 2026-02-04 00:17:40.886838 | orchestrator | 2026-02-04 00:17:40.886852 | orchestrator | TASK [Install HWE 
kernel package on Ubuntu] ************************************ 2026-02-04 00:17:41.249810 | orchestrator | changed: [testbed-manager] 2026-02-04 00:17:41.249915 | orchestrator | 2026-02-04 00:17:41.249932 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2026-02-04 00:17:41.606585 | orchestrator | ok: [testbed-manager] 2026-02-04 00:17:41.606741 | orchestrator | 2026-02-04 00:17:41.606760 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2026-02-04 00:17:41.753619 | orchestrator | skipping: [testbed-manager] 2026-02-04 00:17:41.753771 | orchestrator | 2026-02-04 00:17:41.753786 | orchestrator | PLAY [Apply role traefik] ****************************************************** 2026-02-04 00:17:41.753799 | orchestrator | 2026-02-04 00:17:41.753811 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-04 00:17:43.561093 | orchestrator | ok: [testbed-manager] 2026-02-04 00:17:43.561177 | orchestrator | 2026-02-04 00:17:43.561187 | orchestrator | TASK [Apply traefik role] ****************************************************** 2026-02-04 00:17:43.653412 | orchestrator | included: osism.services.traefik for testbed-manager 2026-02-04 00:17:43.653504 | orchestrator | 2026-02-04 00:17:43.653518 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2026-02-04 00:17:43.712799 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2026-02-04 00:17:43.712904 | orchestrator | 2026-02-04 00:17:43.712922 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2026-02-04 00:17:44.876497 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2026-02-04 00:17:44.876601 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 
2026-02-04 00:17:44.876617 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2026-02-04 00:17:44.876696 | orchestrator | 2026-02-04 00:17:44.876715 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2026-02-04 00:17:46.848254 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2026-02-04 00:17:46.848368 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2026-02-04 00:17:46.848384 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2026-02-04 00:17:46.848397 | orchestrator | 2026-02-04 00:17:46.848410 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ******************** 2026-02-04 00:17:47.540393 | orchestrator | changed: [testbed-manager] => (item=None) 2026-02-04 00:17:47.540499 | orchestrator | changed: [testbed-manager] 2026-02-04 00:17:47.540516 | orchestrator | 2026-02-04 00:17:47.540529 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2026-02-04 00:17:48.190483 | orchestrator | changed: [testbed-manager] => (item=None) 2026-02-04 00:17:48.190572 | orchestrator | changed: [testbed-manager] 2026-02-04 00:17:48.190585 | orchestrator | 2026-02-04 00:17:48.190595 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2026-02-04 00:17:48.259296 | orchestrator | skipping: [testbed-manager] 2026-02-04 00:17:48.259386 | orchestrator | 2026-02-04 00:17:48.259402 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2026-02-04 00:17:48.652320 | orchestrator | ok: [testbed-manager] 2026-02-04 00:17:48.652423 | orchestrator | 2026-02-04 00:17:48.652439 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2026-02-04 00:17:48.760434 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2026-02-04 00:17:48.760530 | orchestrator | 2026-02-04 00:17:48.760545 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2026-02-04 00:17:49.963513 | orchestrator | changed: [testbed-manager] 2026-02-04 00:17:49.963613 | orchestrator | 2026-02-04 00:17:49.963696 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2026-02-04 00:17:50.808060 | orchestrator | changed: [testbed-manager] 2026-02-04 00:17:50.808238 | orchestrator | 2026-02-04 00:17:50.808263 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2026-02-04 00:18:01.977996 | orchestrator | changed: [testbed-manager] 2026-02-04 00:18:01.978156 | orchestrator | 2026-02-04 00:18:01.978200 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2026-02-04 00:18:02.048374 | orchestrator | skipping: [testbed-manager] 2026-02-04 00:18:02.048466 | orchestrator | 2026-02-04 00:18:02.048481 | orchestrator | PLAY [Deploy manager service] ************************************************** 2026-02-04 00:18:02.048494 | orchestrator | 2026-02-04 00:18:02.048506 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-04 00:18:04.043174 | orchestrator | ok: [testbed-manager] 2026-02-04 00:18:04.043271 | orchestrator | 2026-02-04 00:18:04.043310 | orchestrator | TASK [Apply manager role] ****************************************************** 2026-02-04 00:18:04.166788 | orchestrator | included: osism.services.manager for testbed-manager 2026-02-04 00:18:04.166884 | orchestrator | 2026-02-04 00:18:04.166900 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2026-02-04 00:18:04.236292 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2026-02-04 00:18:04.236380 | orchestrator | 2026-02-04 00:18:04.236393 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2026-02-04 00:18:07.038472 | orchestrator | ok: [testbed-manager] 2026-02-04 00:18:07.038598 | orchestrator | 2026-02-04 00:18:07.038678 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2026-02-04 00:18:07.087552 | orchestrator | ok: [testbed-manager] 2026-02-04 00:18:07.087664 | orchestrator | 2026-02-04 00:18:07.087677 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2026-02-04 00:18:07.221474 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2026-02-04 00:18:07.221570 | orchestrator | 2026-02-04 00:18:07.221587 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2026-02-04 00:18:10.231407 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2026-02-04 00:18:10.231509 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2026-02-04 00:18:10.231525 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2026-02-04 00:18:10.231538 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2026-02-04 00:18:10.231549 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2026-02-04 00:18:10.231560 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2026-02-04 00:18:10.231571 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2026-02-04 00:18:10.231582 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2026-02-04 00:18:10.231593 | orchestrator | 2026-02-04 00:18:10.231655 | orchestrator | TASK 
[osism.services.manager : Copy all environment file] ********************** 2026-02-04 00:18:10.911565 | orchestrator | changed: [testbed-manager] 2026-02-04 00:18:10.911787 | orchestrator | 2026-02-04 00:18:10.911817 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2026-02-04 00:18:11.639709 | orchestrator | changed: [testbed-manager] 2026-02-04 00:18:11.639816 | orchestrator | 2026-02-04 00:18:11.639833 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2026-02-04 00:18:11.718462 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2026-02-04 00:18:11.718572 | orchestrator | 2026-02-04 00:18:11.718590 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] ********************* 2026-02-04 00:18:13.021431 | orchestrator | changed: [testbed-manager] => (item=ara) 2026-02-04 00:18:13.021536 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2026-02-04 00:18:13.021551 | orchestrator | 2026-02-04 00:18:13.021564 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2026-02-04 00:18:13.735962 | orchestrator | changed: [testbed-manager] 2026-02-04 00:18:13.736073 | orchestrator | 2026-02-04 00:18:13.736104 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2026-02-04 00:18:13.799758 | orchestrator | skipping: [testbed-manager] 2026-02-04 00:18:13.799849 | orchestrator | 2026-02-04 00:18:13.799864 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ****************** 2026-02-04 00:18:13.880085 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager 2026-02-04 00:18:13.880189 | orchestrator | 2026-02-04 00:18:13.880205 | orchestrator | TASK 
[osism.services.manager : Copy frontend environment file] ***************** 2026-02-04 00:18:14.543151 | orchestrator | changed: [testbed-manager] 2026-02-04 00:18:14.543251 | orchestrator | 2026-02-04 00:18:14.543269 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2026-02-04 00:18:14.603781 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2026-02-04 00:18:14.603910 | orchestrator | 2026-02-04 00:18:14.603927 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2026-02-04 00:18:16.059502 | orchestrator | changed: [testbed-manager] => (item=None) 2026-02-04 00:18:16.059646 | orchestrator | changed: [testbed-manager] => (item=None) 2026-02-04 00:18:16.059665 | orchestrator | changed: [testbed-manager] 2026-02-04 00:18:16.059680 | orchestrator | 2026-02-04 00:18:16.059693 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2026-02-04 00:18:16.671040 | orchestrator | changed: [testbed-manager] 2026-02-04 00:18:16.671138 | orchestrator | 2026-02-04 00:18:16.671155 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2026-02-04 00:18:16.740958 | orchestrator | skipping: [testbed-manager] 2026-02-04 00:18:16.741052 | orchestrator | 2026-02-04 00:18:16.741066 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2026-02-04 00:18:16.844457 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2026-02-04 00:18:16.844543 | orchestrator | 2026-02-04 00:18:16.844556 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2026-02-04 00:18:17.390443 | orchestrator | changed: [testbed-manager] 2026-02-04 
00:18:17.390535 | orchestrator | 2026-02-04 00:18:17.390572 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2026-02-04 00:18:17.825117 | orchestrator | changed: [testbed-manager] 2026-02-04 00:18:17.825214 | orchestrator | 2026-02-04 00:18:17.825233 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2026-02-04 00:18:19.129914 | orchestrator | changed: [testbed-manager] => (item=conductor) 2026-02-04 00:18:19.130121 | orchestrator | changed: [testbed-manager] => (item=openstack) 2026-02-04 00:18:19.130146 | orchestrator | 2026-02-04 00:18:19.130161 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2026-02-04 00:18:19.809345 | orchestrator | changed: [testbed-manager] 2026-02-04 00:18:19.809447 | orchestrator | 2026-02-04 00:18:19.809463 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2026-02-04 00:18:20.224543 | orchestrator | ok: [testbed-manager] 2026-02-04 00:18:20.224710 | orchestrator | 2026-02-04 00:18:20.224731 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2026-02-04 00:18:20.604639 | orchestrator | changed: [testbed-manager] 2026-02-04 00:18:20.604734 | orchestrator | 2026-02-04 00:18:20.604751 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2026-02-04 00:18:20.655104 | orchestrator | skipping: [testbed-manager] 2026-02-04 00:18:20.655189 | orchestrator | 2026-02-04 00:18:20.655204 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2026-02-04 00:18:20.733314 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2026-02-04 00:18:20.733404 | orchestrator | 2026-02-04 00:18:20.733421 | orchestrator | TASK 
[osism.services.manager : Include wrapper vars file] ********************** 2026-02-04 00:18:20.773216 | orchestrator | ok: [testbed-manager] 2026-02-04 00:18:20.773307 | orchestrator | 2026-02-04 00:18:20.773323 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2026-02-04 00:18:22.922808 | orchestrator | changed: [testbed-manager] => (item=osism) 2026-02-04 00:18:22.922939 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2026-02-04 00:18:22.922968 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2026-02-04 00:18:22.922987 | orchestrator | 2026-02-04 00:18:22.923009 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2026-02-04 00:18:23.691256 | orchestrator | changed: [testbed-manager] 2026-02-04 00:18:23.691332 | orchestrator | 2026-02-04 00:18:23.691350 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2026-02-04 00:18:24.446281 | orchestrator | changed: [testbed-manager] 2026-02-04 00:18:24.446406 | orchestrator | 2026-02-04 00:18:24.446436 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2026-02-04 00:18:25.212906 | orchestrator | changed: [testbed-manager] 2026-02-04 00:18:25.213009 | orchestrator | 2026-02-04 00:18:25.213029 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2026-02-04 00:18:25.286912 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2026-02-04 00:18:25.287003 | orchestrator | 2026-02-04 00:18:25.287017 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2026-02-04 00:18:25.346319 | orchestrator | ok: [testbed-manager] 2026-02-04 00:18:25.346425 | orchestrator | 2026-02-04 00:18:25.346442 | orchestrator | TASK 
[osism.services.manager : Copy scripts] *********************************** 2026-02-04 00:18:26.086762 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2026-02-04 00:18:26.086883 | orchestrator | 2026-02-04 00:18:26.086900 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2026-02-04 00:18:26.179294 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2026-02-04 00:18:26.179360 | orchestrator | 2026-02-04 00:18:26.179367 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2026-02-04 00:18:26.919824 | orchestrator | changed: [testbed-manager] 2026-02-04 00:18:26.919917 | orchestrator | 2026-02-04 00:18:26.919934 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2026-02-04 00:18:27.570369 | orchestrator | ok: [testbed-manager] 2026-02-04 00:18:27.570490 | orchestrator | 2026-02-04 00:18:27.570508 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2026-02-04 00:18:27.636115 | orchestrator | skipping: [testbed-manager] 2026-02-04 00:18:27.636204 | orchestrator | 2026-02-04 00:18:27.636218 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2026-02-04 00:18:27.699890 | orchestrator | ok: [testbed-manager] 2026-02-04 00:18:27.699976 | orchestrator | 2026-02-04 00:18:27.699986 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2026-02-04 00:18:28.570416 | orchestrator | changed: [testbed-manager] 2026-02-04 00:18:28.570517 | orchestrator | 2026-02-04 00:18:28.570535 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2026-02-04 00:19:43.412763 | orchestrator | changed: [testbed-manager] 2026-02-04 00:19:43.412878 | orchestrator | 2026-02-04 
00:19:43.412896 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2026-02-04 00:19:44.436384 | orchestrator | ok: [testbed-manager] 2026-02-04 00:19:44.436481 | orchestrator | 2026-02-04 00:19:44.436498 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] ******* 2026-02-04 00:19:44.494115 | orchestrator | skipping: [testbed-manager] 2026-02-04 00:19:44.494207 | orchestrator | 2026-02-04 00:19:44.494222 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2026-02-04 00:19:50.710489 | orchestrator | changed: [testbed-manager] 2026-02-04 00:19:50.710689 | orchestrator | 2026-02-04 00:19:50.710716 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2026-02-04 00:19:50.818761 | orchestrator | ok: [testbed-manager] 2026-02-04 00:19:50.818840 | orchestrator | 2026-02-04 00:19:50.818871 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-02-04 00:19:50.818880 | orchestrator | 2026-02-04 00:19:50.818887 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2026-02-04 00:19:50.870303 | orchestrator | skipping: [testbed-manager] 2026-02-04 00:19:50.870388 | orchestrator | 2026-02-04 00:19:50.870406 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2026-02-04 00:20:50.922435 | orchestrator | Pausing for 60 seconds 2026-02-04 00:20:50.922634 | orchestrator | changed: [testbed-manager] 2026-02-04 00:20:50.922655 | orchestrator | 2026-02-04 00:20:50.922668 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2026-02-04 00:20:54.523631 | orchestrator | changed: [testbed-manager] 2026-02-04 00:20:54.523740 | orchestrator | 2026-02-04 00:20:54.523757 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for 
an healthy manager service] *** 2026-02-04 00:21:56.841558 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2026-02-04 00:21:56.841665 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 2026-02-04 00:21:56.841678 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (48 retries left). 2026-02-04 00:21:56.841715 | orchestrator | changed: [testbed-manager] 2026-02-04 00:21:56.841726 | orchestrator | 2026-02-04 00:21:56.841736 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2026-02-04 00:22:08.100455 | orchestrator | changed: [testbed-manager] 2026-02-04 00:22:08.100538 | orchestrator | 2026-02-04 00:22:08.100547 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2026-02-04 00:22:08.185276 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2026-02-04 00:22:08.185367 | orchestrator | 2026-02-04 00:22:08.185378 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-02-04 00:22:08.185385 | orchestrator | 2026-02-04 00:22:08.185392 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2026-02-04 00:22:08.242323 | orchestrator | skipping: [testbed-manager] 2026-02-04 00:22:08.242418 | orchestrator | 2026-02-04 00:22:08.242465 | orchestrator | TASK [osism.services.manager : Include version verification tasks] ************* 2026-02-04 00:22:08.317573 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager 2026-02-04 00:22:08.317674 | orchestrator | 2026-02-04 00:22:08.317691 | orchestrator | TASK [osism.services.manager : Deploy service 
manager version check script] **** 2026-02-04 00:22:09.117800 | orchestrator | changed: [testbed-manager] 2026-02-04 00:22:09.117929 | orchestrator | 2026-02-04 00:22:09.117946 | orchestrator | TASK [osism.services.manager : Execute service manager version check] ********** 2026-02-04 00:22:12.628749 | orchestrator | ok: [testbed-manager] 2026-02-04 00:22:12.628901 | orchestrator | 2026-02-04 00:22:12.628920 | orchestrator | TASK [osism.services.manager : Display version check results] ****************** 2026-02-04 00:22:12.706858 | orchestrator | ok: [testbed-manager] => { 2026-02-04 00:22:12.706961 | orchestrator | "version_check_result.stdout_lines": [ 2026-02-04 00:22:12.706986 | orchestrator | "=== OSISM Container Version Check ===", 2026-02-04 00:22:12.707009 | orchestrator | "Checking running containers against expected versions...", 2026-02-04 00:22:12.707029 | orchestrator | "", 2026-02-04 00:22:12.707050 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)", 2026-02-04 00:22:12.707068 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:latest", 2026-02-04 00:22:12.707088 | orchestrator | " Enabled: true", 2026-02-04 00:22:12.707106 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:latest", 2026-02-04 00:22:12.707118 | orchestrator | " Status: ✅ MATCH", 2026-02-04 00:22:12.707129 | orchestrator | "", 2026-02-04 00:22:12.707140 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)", 2026-02-04 00:22:12.707152 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:latest", 2026-02-04 00:22:12.707163 | orchestrator | " Enabled: true", 2026-02-04 00:22:12.707174 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:latest", 2026-02-04 00:22:12.707186 | orchestrator | " Status: ✅ MATCH", 2026-02-04 00:22:12.707197 | orchestrator | "", 2026-02-04 00:22:12.707208 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes 
Service)", 2026-02-04 00:22:12.707219 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:latest", 2026-02-04 00:22:12.707229 | orchestrator | " Enabled: true", 2026-02-04 00:22:12.707240 | orchestrator | " Running: registry.osism.tech/osism/osism-kubernetes:latest", 2026-02-04 00:22:12.707251 | orchestrator | " Status: ✅ MATCH", 2026-02-04 00:22:12.707262 | orchestrator | "", 2026-02-04 00:22:12.707273 | orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)", 2026-02-04 00:22:12.707285 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:reef", 2026-02-04 00:22:12.707296 | orchestrator | " Enabled: true", 2026-02-04 00:22:12.707306 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:reef", 2026-02-04 00:22:12.707317 | orchestrator | " Status: ✅ MATCH", 2026-02-04 00:22:12.707328 | orchestrator | "", 2026-02-04 00:22:12.707343 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)", 2026-02-04 00:22:12.707399 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:2024.2", 2026-02-04 00:22:12.707421 | orchestrator | " Enabled: true", 2026-02-04 00:22:12.707473 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:2024.2", 2026-02-04 00:22:12.707488 | orchestrator | " Status: ✅ MATCH", 2026-02-04 00:22:12.707501 | orchestrator | "", 2026-02-04 00:22:12.707513 | orchestrator | "Checking service: osismclient (OSISM Client)", 2026-02-04 00:22:12.707527 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-02-04 00:22:12.707541 | orchestrator | " Enabled: true", 2026-02-04 00:22:12.707553 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-02-04 00:22:12.707567 | orchestrator | " Status: ✅ MATCH", 2026-02-04 00:22:12.707580 | orchestrator | "", 2026-02-04 00:22:12.707593 | orchestrator | "Checking service: ara-server (ARA Server)", 2026-02-04 00:22:12.707607 | orchestrator | " Expected: 
registry.osism.tech/osism/ara-server:1.7.3", 2026-02-04 00:22:12.707620 | orchestrator | " Enabled: true", 2026-02-04 00:22:12.707633 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3", 2026-02-04 00:22:12.707646 | orchestrator | " Status: ✅ MATCH", 2026-02-04 00:22:12.707658 | orchestrator | "", 2026-02-04 00:22:12.707670 | orchestrator | "Checking service: mariadb (MariaDB for ARA)", 2026-02-04 00:22:12.707683 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-02-04 00:22:12.707696 | orchestrator | " Enabled: true", 2026-02-04 00:22:12.707715 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-02-04 00:22:12.707747 | orchestrator | " Status: ✅ MATCH", 2026-02-04 00:22:12.707767 | orchestrator | "", 2026-02-04 00:22:12.707786 | orchestrator | "Checking service: frontend (OSISM Frontend)", 2026-02-04 00:22:12.707811 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:latest", 2026-02-04 00:22:12.707830 | orchestrator | " Enabled: true", 2026-02-04 00:22:12.707842 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:latest", 2026-02-04 00:22:12.707853 | orchestrator | " Status: ✅ MATCH", 2026-02-04 00:22:12.707864 | orchestrator | "", 2026-02-04 00:22:12.707875 | orchestrator | "Checking service: redis (Redis Cache)", 2026-02-04 00:22:12.707886 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-02-04 00:22:12.707897 | orchestrator | " Enabled: true", 2026-02-04 00:22:12.707908 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-02-04 00:22:12.707918 | orchestrator | " Status: ✅ MATCH", 2026-02-04 00:22:12.707929 | orchestrator | "", 2026-02-04 00:22:12.707940 | orchestrator | "Checking service: api (OSISM API Service)", 2026-02-04 00:22:12.708024 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-02-04 00:22:12.708036 | orchestrator | 
" Enabled: true", 2026-02-04 00:22:12.708047 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-02-04 00:22:12.708058 | orchestrator | " Status: ✅ MATCH", 2026-02-04 00:22:12.708070 | orchestrator | "", 2026-02-04 00:22:12.708089 | orchestrator | "Checking service: listener (OpenStack Event Listener)", 2026-02-04 00:22:12.708109 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-02-04 00:22:12.708127 | orchestrator | " Enabled: true", 2026-02-04 00:22:12.708147 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-02-04 00:22:12.708166 | orchestrator | " Status: ✅ MATCH", 2026-02-04 00:22:12.708186 | orchestrator | "", 2026-02-04 00:22:12.708201 | orchestrator | "Checking service: openstack (OpenStack Integration)", 2026-02-04 00:22:12.708212 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-02-04 00:22:12.708223 | orchestrator | " Enabled: true", 2026-02-04 00:22:12.708234 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-02-04 00:22:12.708245 | orchestrator | " Status: ✅ MATCH", 2026-02-04 00:22:12.708255 | orchestrator | "", 2026-02-04 00:22:12.708266 | orchestrator | "Checking service: beat (Celery Beat Scheduler)", 2026-02-04 00:22:12.708277 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-02-04 00:22:12.708288 | orchestrator | " Enabled: true", 2026-02-04 00:22:12.708311 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-02-04 00:22:12.708322 | orchestrator | " Status: ✅ MATCH", 2026-02-04 00:22:12.708333 | orchestrator | "", 2026-02-04 00:22:12.708344 | orchestrator | "Checking service: flower (Celery Flower Monitor)", 2026-02-04 00:22:12.708375 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-02-04 00:22:12.708386 | orchestrator | " Enabled: true", 2026-02-04 00:22:12.708397 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-02-04 00:22:12.708408 
| orchestrator | " Status: ✅ MATCH", 2026-02-04 00:22:12.708419 | orchestrator | "", 2026-02-04 00:22:12.708460 | orchestrator | "=== Summary ===", 2026-02-04 00:22:12.708480 | orchestrator | "Errors (version mismatches): 0", 2026-02-04 00:22:12.708498 | orchestrator | "Warnings (expected containers not running): 0", 2026-02-04 00:22:12.708517 | orchestrator | "", 2026-02-04 00:22:12.708535 | orchestrator | "✅ All running containers match expected versions!" 2026-02-04 00:22:12.708555 | orchestrator | ] 2026-02-04 00:22:12.708571 | orchestrator | } 2026-02-04 00:22:12.708582 | orchestrator | 2026-02-04 00:22:12.708593 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] *** 2026-02-04 00:22:12.772810 | orchestrator | skipping: [testbed-manager] 2026-02-04 00:22:12.772908 | orchestrator | 2026-02-04 00:22:12.772924 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 00:22:12.772937 | orchestrator | testbed-manager : ok=70 changed=37 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2026-02-04 00:22:12.772949 | orchestrator | 2026-02-04 00:22:12.890737 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-02-04 00:22:12.890840 | orchestrator | + deactivate 2026-02-04 00:22:12.890859 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-02-04 00:22:12.890873 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-04 00:22:12.890885 | orchestrator | + export PATH 2026-02-04 00:22:12.890896 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-02-04 00:22:12.890908 | orchestrator | + '[' -n '' ']' 2026-02-04 00:22:12.890920 | orchestrator | + hash -r 2026-02-04 00:22:12.890931 | orchestrator | + '[' -n '' ']' 2026-02-04 00:22:12.890942 | orchestrator | + unset VIRTUAL_ENV 2026-02-04 00:22:12.890953 | orchestrator | + 
unset VIRTUAL_ENV_PROMPT 2026-02-04 00:22:12.890965 | orchestrator | + '[' '!' '' = nondestructive ']' 2026-02-04 00:22:12.890976 | orchestrator | + unset -f deactivate 2026-02-04 00:22:12.890987 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2026-02-04 00:22:12.900400 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-02-04 00:22:12.900465 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-02-04 00:22:12.900478 | orchestrator | + local max_attempts=60 2026-02-04 00:22:12.900490 | orchestrator | + local name=ceph-ansible 2026-02-04 00:22:12.900501 | orchestrator | + local attempt_num=1 2026-02-04 00:22:12.901660 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-04 00:22:12.943077 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-04 00:22:12.943162 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-02-04 00:22:12.943175 | orchestrator | + local max_attempts=60 2026-02-04 00:22:12.943188 | orchestrator | + local name=kolla-ansible 2026-02-04 00:22:12.943199 | orchestrator | + local attempt_num=1 2026-02-04 00:22:12.943871 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-02-04 00:22:12.978844 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-04 00:22:12.978935 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-02-04 00:22:12.978952 | orchestrator | + local max_attempts=60 2026-02-04 00:22:12.978964 | orchestrator | + local name=osism-ansible 2026-02-04 00:22:12.978976 | orchestrator | + local attempt_num=1 2026-02-04 00:22:12.979482 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-02-04 00:22:13.018697 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-04 00:22:13.018781 | orchestrator | + [[ true == \t\r\u\e ]] 2026-02-04 00:22:13.018794 | orchestrator | + sh -c 
/opt/configuration/scripts/disable-ara.sh 2026-02-04 00:22:13.785344 | orchestrator | + docker compose --project-directory /opt/manager ps 2026-02-04 00:22:13.988706 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2026-02-04 00:22:13.988833 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" ceph-ansible 2 minutes ago Up About a minute (healthy) 2026-02-04 00:22:13.988849 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" kolla-ansible 2 minutes ago Up About a minute (healthy) 2026-02-04 00:22:13.988861 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" api 2 minutes ago Up 2 minutes (healthy) 192.168.16.5:8000->8000/tcp 2026-02-04 00:22:13.988874 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server 2 minutes ago Up 2 minutes (healthy) 8000/tcp 2026-02-04 00:22:13.988885 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" beat 2 minutes ago Up 2 minutes (healthy) 2026-02-04 00:22:13.988896 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" flower 2 minutes ago Up 2 minutes (healthy) 2026-02-04 00:22:13.988916 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" inventory_reconciler 2 minutes ago Up About a minute (healthy) 2026-02-04 00:22:13.988945 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" listener 2 minutes ago Up 2 minutes (healthy) 2026-02-04 00:22:13.988970 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb 2 minutes ago Up 2 minutes (healthy) 3306/tcp 2026-02-04 00:22:13.988981 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- 
osism…" openstack 2 minutes ago Up 2 minutes (healthy) 2026-02-04 00:22:13.988991 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis 2 minutes ago Up 2 minutes (healthy) 6379/tcp 2026-02-04 00:22:13.989002 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" osism-ansible 2 minutes ago Up About a minute (healthy) 2026-02-04 00:22:13.989013 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" frontend 2 minutes ago Up 2 minutes 192.168.16.5:3000->3000/tcp 2026-02-04 00:22:13.989115 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" osism-kubernetes 2 minutes ago Up About a minute (healthy) 2026-02-04 00:22:13.989131 | orchestrator | osismclient registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" osismclient 2 minutes ago Up 2 minutes (healthy) 2026-02-04 00:22:13.996624 | orchestrator | ++ semver latest 7.0.0 2026-02-04 00:22:14.056214 | orchestrator | + [[ -1 -ge 0 ]] 2026-02-04 00:22:14.056293 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-02-04 00:22:14.056301 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2026-02-04 00:22:14.060668 | orchestrator | + osism apply resolvconf -l testbed-manager 2026-02-04 00:22:26.369588 | orchestrator | 2026-02-04 00:22:26 | INFO  | Prepare task for execution of resolvconf. 2026-02-04 00:22:26.596831 | orchestrator | 2026-02-04 00:22:26 | INFO  | Task 30de669c-f80a-4a97-a8a8-a0d18eec7ce2 (resolvconf) was prepared for execution. 2026-02-04 00:22:26.596927 | orchestrator | 2026-02-04 00:22:26 | INFO  | It takes a moment until task 30de669c-f80a-4a97-a8a8-a0d18eec7ce2 (resolvconf) has been started and output is visible here. 
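The bootstrap trace above polls each tool container with a helper named `wait_for_container_healthy` before proceeding. Only the function name, the `max_attempts`/`name` parameters, and the `docker inspect` health probe appear in the trace; the retry loop and sleep interval below are assumptions, sketched to show the pattern rather than reproduce the script verbatim.

```shell
# Sketch of the health-wait helper seen in the trace. The docker inspect
# probe and the parameter names come from the log; the loop body and the
# 5-second poll interval are assumptions.
wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1
    until [[ "$(/usr/bin/docker inspect -f '{{.State.Health.Status}}' "$name" 2>/dev/null)" == "healthy" ]]; do
        if (( attempt_num >= max_attempts )); then
            echo "Container ${name} did not become healthy in time" >&2
            return 1
        fi
        (( attempt_num++ ))
        sleep 5
    done
}

# Usage, as invoked in the trace:
# wait_for_container_healthy 60 ceph-ansible
```

In the log the very first probe already returns `healthy` for all three containers, so the loop body never runs there.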
2026-02-04 00:22:41.224636 | orchestrator | 2026-02-04 00:22:41.224747 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2026-02-04 00:22:41.224765 | orchestrator | 2026-02-04 00:22:41.224778 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-04 00:22:41.224789 | orchestrator | Wednesday 04 February 2026 00:22:30 +0000 (0:00:00.149) 0:00:00.149 **** 2026-02-04 00:22:41.224801 | orchestrator | ok: [testbed-manager] 2026-02-04 00:22:41.224813 | orchestrator | 2026-02-04 00:22:41.224824 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2026-02-04 00:22:41.224836 | orchestrator | Wednesday 04 February 2026 00:22:34 +0000 (0:00:03.906) 0:00:04.056 **** 2026-02-04 00:22:41.224847 | orchestrator | skipping: [testbed-manager] 2026-02-04 00:22:41.224859 | orchestrator | 2026-02-04 00:22:41.224870 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-02-04 00:22:41.224881 | orchestrator | Wednesday 04 February 2026 00:22:34 +0000 (0:00:00.068) 0:00:04.124 **** 2026-02-04 00:22:41.224892 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2026-02-04 00:22:41.224904 | orchestrator | 2026-02-04 00:22:41.224917 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-02-04 00:22:41.224936 | orchestrator | Wednesday 04 February 2026 00:22:34 +0000 (0:00:00.102) 0:00:04.227 **** 2026-02-04 00:22:41.224954 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2026-02-04 00:22:41.224972 | orchestrator | 2026-02-04 00:22:41.225018 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring 
/etc/resolv.conf] *** 2026-02-04 00:22:41.225039 | orchestrator | Wednesday 04 February 2026 00:22:35 +0000 (0:00:00.093) 0:00:04.320 **** 2026-02-04 00:22:41.225056 | orchestrator | ok: [testbed-manager] 2026-02-04 00:22:41.225074 | orchestrator | 2026-02-04 00:22:41.225092 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-02-04 00:22:41.225109 | orchestrator | Wednesday 04 February 2026 00:22:36 +0000 (0:00:01.223) 0:00:05.544 **** 2026-02-04 00:22:41.225127 | orchestrator | skipping: [testbed-manager] 2026-02-04 00:22:41.225145 | orchestrator | 2026-02-04 00:22:41.225166 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-02-04 00:22:41.225186 | orchestrator | Wednesday 04 February 2026 00:22:36 +0000 (0:00:00.064) 0:00:05.608 **** 2026-02-04 00:22:41.225203 | orchestrator | ok: [testbed-manager] 2026-02-04 00:22:41.225216 | orchestrator | 2026-02-04 00:22:41.225230 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-02-04 00:22:41.225243 | orchestrator | Wednesday 04 February 2026 00:22:36 +0000 (0:00:00.527) 0:00:06.135 **** 2026-02-04 00:22:41.225254 | orchestrator | skipping: [testbed-manager] 2026-02-04 00:22:41.225265 | orchestrator | 2026-02-04 00:22:41.225276 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-02-04 00:22:41.225289 | orchestrator | Wednesday 04 February 2026 00:22:36 +0000 (0:00:00.081) 0:00:06.217 **** 2026-02-04 00:22:41.225299 | orchestrator | changed: [testbed-manager] 2026-02-04 00:22:41.225310 | orchestrator | 2026-02-04 00:22:41.225321 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-02-04 00:22:41.225332 | orchestrator | Wednesday 04 February 2026 00:22:37 +0000 (0:00:00.583) 0:00:06.800 **** 2026-02-04 00:22:41.225343 | orchestrator | changed: 
[testbed-manager] 2026-02-04 00:22:41.225354 | orchestrator | 2026-02-04 00:22:41.225364 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-02-04 00:22:41.225375 | orchestrator | Wednesday 04 February 2026 00:22:38 +0000 (0:00:01.121) 0:00:07.922 **** 2026-02-04 00:22:41.225386 | orchestrator | ok: [testbed-manager] 2026-02-04 00:22:41.225442 | orchestrator | 2026-02-04 00:22:41.225456 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-02-04 00:22:41.225467 | orchestrator | Wednesday 04 February 2026 00:22:39 +0000 (0:00:01.011) 0:00:08.933 **** 2026-02-04 00:22:41.225478 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2026-02-04 00:22:41.225489 | orchestrator | 2026-02-04 00:22:41.225500 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-02-04 00:22:41.225510 | orchestrator | Wednesday 04 February 2026 00:22:39 +0000 (0:00:00.078) 0:00:09.011 **** 2026-02-04 00:22:41.225521 | orchestrator | changed: [testbed-manager] 2026-02-04 00:22:41.225532 | orchestrator | 2026-02-04 00:22:41.225544 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 00:22:41.225556 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-04 00:22:41.225567 | orchestrator | 2026-02-04 00:22:41.225578 | orchestrator | 2026-02-04 00:22:41.225588 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-04 00:22:41.225599 | orchestrator | Wednesday 04 February 2026 00:22:40 +0000 (0:00:01.198) 0:00:10.210 **** 2026-02-04 00:22:41.225610 | orchestrator | =============================================================================== 2026-02-04 00:22:41.225620 | 
orchestrator | Gathering Facts --------------------------------------------------------- 3.91s 2026-02-04 00:22:41.225631 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.22s 2026-02-04 00:22:41.225642 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.20s 2026-02-04 00:22:41.225652 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.12s 2026-02-04 00:22:41.225663 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 1.01s 2026-02-04 00:22:41.225673 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.58s 2026-02-04 00:22:41.225705 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.53s 2026-02-04 00:22:41.225716 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.10s 2026-02-04 00:22:41.225727 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.09s 2026-02-04 00:22:41.225738 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.08s 2026-02-04 00:22:41.225748 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.08s 2026-02-04 00:22:41.225759 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.07s 2026-02-04 00:22:41.225770 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.06s 2026-02-04 00:22:41.540560 | orchestrator | + osism apply sshconfig 2026-02-04 00:22:53.695959 | orchestrator | 2026-02-04 00:22:53 | INFO  | Prepare task for execution of sshconfig. 2026-02-04 00:22:53.764949 | orchestrator | 2026-02-04 00:22:53 | INFO  | Task f1d4192c-8f44-409c-9dd6-9efa579ae478 (sshconfig) was prepared for execution. 
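The deploy script earlier in the trace gates on `semver latest 7.0.0` (which returns `-1`) and then special-cases the `latest` tag so the gate still passes. The external `semver` helper's interface is not shown in the log, so the stand-in below uses plain `sort -V`; only the gate structure — numeric comparison first, then a `latest` escape hatch — mirrors the trace. The helper name `version_at_least` is hypothetical.

```shell
# Stand-in for the version gate seen in the trace: a "latest" tag is always
# accepted, otherwise versions are compared with sort -V. The real script
# uses an external `semver` command instead.
version_at_least() {
    local tag="$1" minimum="$2"
    [[ "$tag" == "latest" ]] && return 0
    # sort -V orders versions ascending; the gate passes when the minimum
    # sorts first, i.e. tag >= minimum.
    [[ "$(printf '%s\n' "$minimum" "$tag" | sort -V | head -n 1)" == "$minimum" ]]
}

version_at_least latest 7.0.0 && echo "gate passed"   # latest escape hatch
version_at_least 7.1.0 7.0.0  && echo "gate passed"   # 7.1.0 >= 7.0.0
version_at_least 6.0.0 7.0.0  || echo "gate failed"   # 6.0.0 < 7.0.0
```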
2026-02-04 00:22:53.765033 | orchestrator | 2026-02-04 00:22:53 | INFO  | It takes a moment until task f1d4192c-8f44-409c-9dd6-9efa579ae478 (sshconfig) has been started and output is visible here. 2026-02-04 00:23:06.119376 | orchestrator | 2026-02-04 00:23:06.119533 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2026-02-04 00:23:06.119550 | orchestrator | 2026-02-04 00:23:06.119561 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2026-02-04 00:23:06.119572 | orchestrator | Wednesday 04 February 2026 00:22:58 +0000 (0:00:00.175) 0:00:00.175 **** 2026-02-04 00:23:06.119582 | orchestrator | ok: [testbed-manager] 2026-02-04 00:23:06.119607 | orchestrator | 2026-02-04 00:23:06.119618 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2026-02-04 00:23:06.119656 | orchestrator | Wednesday 04 February 2026 00:22:58 +0000 (0:00:00.556) 0:00:00.731 **** 2026-02-04 00:23:06.119666 | orchestrator | changed: [testbed-manager] 2026-02-04 00:23:06.119677 | orchestrator | 2026-02-04 00:23:06.119687 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2026-02-04 00:23:06.119696 | orchestrator | Wednesday 04 February 2026 00:22:59 +0000 (0:00:00.554) 0:00:01.286 **** 2026-02-04 00:23:06.119706 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2026-02-04 00:23:06.119716 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2026-02-04 00:23:06.119725 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2026-02-04 00:23:06.119735 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2026-02-04 00:23:06.119744 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2026-02-04 00:23:06.119754 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5) 2026-02-04 00:23:06.119763 | orchestrator | changed: 
[testbed-manager] => (item=testbed-manager) 2026-02-04 00:23:06.119773 | orchestrator | 2026-02-04 00:23:06.119783 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2026-02-04 00:23:06.119792 | orchestrator | Wednesday 04 February 2026 00:23:05 +0000 (0:00:05.945) 0:00:07.231 **** 2026-02-04 00:23:06.119802 | orchestrator | skipping: [testbed-manager] 2026-02-04 00:23:06.119811 | orchestrator | 2026-02-04 00:23:06.119821 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2026-02-04 00:23:06.119831 | orchestrator | Wednesday 04 February 2026 00:23:05 +0000 (0:00:00.085) 0:00:07.316 **** 2026-02-04 00:23:06.119840 | orchestrator | changed: [testbed-manager] 2026-02-04 00:23:06.119850 | orchestrator | 2026-02-04 00:23:06.119860 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 00:23:06.119871 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-04 00:23:06.119881 | orchestrator | 2026-02-04 00:23:06.119890 | orchestrator | 2026-02-04 00:23:06.119901 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-04 00:23:06.119913 | orchestrator | Wednesday 04 February 2026 00:23:05 +0000 (0:00:00.577) 0:00:07.894 **** 2026-02-04 00:23:06.119926 | orchestrator | =============================================================================== 2026-02-04 00:23:06.119937 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.95s 2026-02-04 00:23:06.119949 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.58s 2026-02-04 00:23:06.119960 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.56s 2026-02-04 00:23:06.119972 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist 
-------------------- 0.55s 2026-02-04 00:23:06.119983 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.09s 2026-02-04 00:23:06.464869 | orchestrator | + osism apply known-hosts 2026-02-04 00:23:18.550495 | orchestrator | 2026-02-04 00:23:18 | INFO  | Prepare task for execution of known-hosts. 2026-02-04 00:23:18.634922 | orchestrator | 2026-02-04 00:23:18 | INFO  | Task 5e628a31-e020-45ae-a43c-5e50491f9f46 (known-hosts) was prepared for execution. 2026-02-04 00:23:18.635017 | orchestrator | 2026-02-04 00:23:18 | INFO  | It takes a moment until task 5e628a31-e020-45ae-a43c-5e50491f9f46 (known-hosts) has been started and output is visible here. 2026-02-04 00:23:35.670844 | orchestrator | 2026-02-04 00:23:35.670960 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2026-02-04 00:23:35.670979 | orchestrator | 2026-02-04 00:23:35.670991 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2026-02-04 00:23:35.671004 | orchestrator | Wednesday 04 February 2026 00:23:23 +0000 (0:00:00.202) 0:00:00.202 **** 2026-02-04 00:23:35.671016 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-02-04 00:23:35.671028 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-02-04 00:23:35.671062 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-02-04 00:23:35.671074 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-02-04 00:23:35.671085 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-02-04 00:23:35.671096 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-02-04 00:23:35.671106 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-02-04 00:23:35.671117 | orchestrator | 2026-02-04 00:23:35.671129 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2026-02-04 
00:23:35.671141 | orchestrator | Wednesday 04 February 2026 00:23:29 +0000 (0:00:06.308) 0:00:06.511 **** 2026-02-04 00:23:35.671165 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-02-04 00:23:35.671179 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-02-04 00:23:35.671190 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-02-04 00:23:35.671201 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-02-04 00:23:35.671212 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-02-04 00:23:35.671223 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-02-04 00:23:35.671234 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-02-04 00:23:35.671245 | orchestrator | 2026-02-04 00:23:35.671264 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-04 00:23:35.671283 | orchestrator | Wednesday 04 February 2026 00:23:29 +0000 (0:00:00.170) 0:00:06.682 **** 2026-02-04 00:23:35.671300 | 
orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEquUKFJAb6b9uBJ6eug5Wq5nuN6kET6a8ny/jLNDCyi+CZIyjV5RMNGC0Norp/BjpyW8GuHhUz5lT2BAHf0PHY=) 2026-02-04 00:23:35.671325 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDd3EwnsAUt/odzIW0tNcx0EwIi2c6G2mRJYl4+wT0A0kZUoZ9VqPPTdJ5e+kUUYX7eNybUSfC8sqlyobiSj+W5XC3KWO+SE+sZc8yu1KKSGDFIFHnInpJY7T/2VxhiL+Dq2wB8rWwPGs8DWnuCSUfZqNuJwRb7RHuTOy0/wGZ0vBXbRstHGHHrA4giRnDtqlnE61e1oH876L94ABmohrzeEPi8GhSgDjIov6TAcCMg5VCs+KxKbYHIKYJQ7FeEfCObVf+1AtkxRjSq7NMqyHyFwPxYpWbBLL/19p5JsliZBYHKLFqcfiptqldSm8BHbBPxdOEmsljAq3AbTkjqzTIXKUmyHxTn9ePUYPTKJP7/sXfo7lmHsLV1wdgYRp8VLmtQJ8Yl9pCaptVaOYg7mG2NFW8/WuZkFiW45bnbkXsXl3J5x/f3lr22c0P8/2GInD0Qge4PgRsJ0EzIbtTjHJaZrOd12xrliiEDQBdeDLGCUiliVHBwXS2QI04A0bnR+j8=) 2026-02-04 00:23:35.671350 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICayi6AbzAZRLij2vWDD5AgzW0PPkMM65tI4wv/tk/Vg) 2026-02-04 00:23:35.671425 | orchestrator | 2026-02-04 00:23:35.671448 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-04 00:23:35.671469 | orchestrator | Wednesday 04 February 2026 00:23:30 +0000 (0:00:01.200) 0:00:07.883 **** 2026-02-04 00:23:35.671488 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBO2svct6n7ersVrqVok6Iy8xTNx0p9HG7sOSaCGkp3CQNy7YK7N93jQ1X2xAzHwLVlxHAj5eLD0Iiw+7/Q5MJ+g=) 2026-02-04 00:23:35.671579 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDAKuM2viW+aSKDRhfPwK9r0AXzIaagTow3w5iqH8ohNGIDQ3syX2qJhCp+1PZVp75wQSRu9Iu2e3XJ1WOT7Kj1XmtgwMkt9/J+WkDSSIhFtEdsnCKp/cNKdVpBvX25P1JNp3XWF65lcviQ4XqKDzL3H2Dzps6eVfs9l9syZfR14fUX4JxbDMOgaxGblwkqqo23b1NyJ7XiRKWsSzjKqyJiyqj7EE8QvYkUl2PA4NJGifIbiTmkjQ+63NmtJdLSmF8vGVMbO3BILGw2ZO12ijONv5sQrdWag0SaQmNLR+MwHcOdU+49GYEtagJE0BY+5szQiGBMpfSPpwlFtl9e9MiEHGdA1iDY9jCkFfX8vndhKRH1QTm5J46Ydm4Z9u8CqtNzKF/vgVBRr/2XWYOob0cekLjDBPFBFadxvq8dqZq+QoGj028dky+h+XcPL0XE664MEdcAqvn+J+JjkY4F5Ynqxkqcr1VR9Y5g1eMH7o5brHH+LqUjq6Q4zLCOIj+tbLU=) 2026-02-04 00:23:35.671597 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOpz//6IpRHELtgyNAg+jWHmoz+5exjXz//PpJlrZ1qe) 2026-02-04 00:23:35.671610 | orchestrator | 2026-02-04 00:23:35.671623 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-04 00:23:35.671636 | orchestrator | Wednesday 04 February 2026 00:23:31 +0000 (0:00:01.147) 0:00:09.031 **** 2026-02-04 00:23:35.671650 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDy+3wPdok5Kbvfq16r6KJI/EQj0iKmrIkkvYDl2v0N1U2g1cREAkz6Hu3PgUGAXsZmqws1RJHCM3C4nse52dsCtAngMW6E7s+Uv1JWHuusvNF7AGwtZkqL8gUyJIWk1jRfzSTwklnJd6sP49WNdcVohMar8Fl6te7+GCZdkVjEEg3SGdkjqNC0jfHFZPOiCA6De+h7X+TspR9a9niajP7o5qgfP2cn7W4STttSBTZo4Ne/qy2x+MD4pLuDEz1a47AF9avC2aSqlVHd/rrbfFDAseJD8g/LsW+qhMQk53K/8Lh2+2H+DOQTQVT+sy/v63TIv4x0YNHqDsOxPWKeC9FJe9yjkVRvVPPkG411aOiXuMNRAuXLDuik1DC6Ul8eGzTGtpYelfu+ECXAXXd2DeTZLglBEEUsWYqPsNXIzfrKEhQ/xyR2nlycz74J1L+eDTcX4ah2awmhRX26PhCsPBFD2wD9qsLvHmGyN5jJynXNCkazksubpGt5U44OoSO02WE=) 2026-02-04 00:23:35.671664 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCL9nKZ5qwbXyz9oAz0pjy2F/g/KIa4T2bJG+ZRBAGNElxBMgGnwzqb+SsQaGDa2fx/EVOJDWmALKqtWx9YfhxE=) 2026-02-04 00:23:35.671750 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAID6UX19dOpsHeko034x1UkSovWVI4Ul8wlxwISOf/Ocu) 2026-02-04 00:23:35.671762 | orchestrator | 2026-02-04 00:23:35.671774 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-04 00:23:35.671784 | orchestrator | Wednesday 04 February 2026 00:23:33 +0000 (0:00:01.109) 0:00:10.140 **** 2026-02-04 00:23:35.671800 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAsYWXaCrV+k9B74LMljwqVafRJ5Qk10aqkU8x9ubNEqNNmPSaq9uOYjEx2tqXKr5i4YZbv8cJO8Fl/ttLjFWJM=) 2026-02-04 00:23:35.671812 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC+7wrs2y7Asjrdy8wXQu4l3dIyr6z6n0txArDj+EF9ZDt0MItnVhAmiCLJU9m25/cIQyxzBPjqcFGN6lmJ2EWxgcYQIsURhQCoXbzCLC+5tgw0rc8oyWGjbScO6ZA77dfXfrBQDmES6BEddclCS98z2ML6vW1WeAW1rQDjpX3wLmFFgy2l3iyvaib/a7IrVMPbCikpR/jiZ6RQavIrgZwSvjzUVKzpSw7iNlQLvAea1hbYHOk5gYndTF/Xm7tcLnCRFF5ItNfRTqISCJkU2bZ+14rZ4/ge51f4Bl1aEG52v7Kwh41X18Bd9Z4rqc+Eqhtq6UYGilFRl6IFfNQFypnC/DUwp7CEqfIctPQ7n89EoF3A/Uij6d+0jRzHfaTIxX6Qd8I0mTJC1vNVB9E9/6nBNym5ogZUd6+bCQoytuQ+YxCG0zyhpK883dguXICKsNmCwDglQ2zoPdLlURNGMcKHihO2ws7YFqg4h5/n9OyuHZpkW5Ba2zPZeyXaRtsT7Gc=) 2026-02-04 00:23:35.671824 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIX4cZNTzGIonEfK4F+EeCb2yQuBekZzncERsdss+0u2) 2026-02-04 00:23:35.671835 | orchestrator | 2026-02-04 00:23:35.671846 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-04 00:23:35.671856 | orchestrator | Wednesday 04 February 2026 00:23:34 +0000 (0:00:01.069) 0:00:11.210 **** 2026-02-04 00:23:35.671868 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC7zoBNi62miKI7pEIxqsqj4lc3vyDfh+DENEV2y9DMQU6tlWX9/EBLzIE9mvgAMTSp1fNtah3dBR9vpYk5Iu///wxQ5WD6Mfo5HPaxaungtysK40h4uw/7JgNm3pdpsaTsPz0DVBhpJ8m5vev7bzlZ2ztUHwIuDx225UvH2EfwiDYsl6udQN3VlLV4rYIQUYjFHnsjr9uB9aw/jm4ktD5bLFst/qmTlsLP9o/UnuOIExoqMl7iteVNfSv3rCwZIrEupTNqY11FnsvKHg0GsB/gGEZtAMonZpztClow6sucirWBtMAt6bf+U4rmooizuhHyVA5WDTwFZAxNQCo3SbevpgHOPADGk4OZJyCuiptEgtCGSTw6Q7X9GILfqk/X6wi1cRD5lr/6JmLFfPLGuZDyKx9EWch1e+0FyXVeyOxXPFuWx+x4qWGIvgnRI1nIlFpecgDI30mHLp5UKThyo7BqFjyNCc33STNi5z03himLY8/maKenKH0iKSRecggz1vs=) 2026-02-04 00:23:35.671886 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEixqV5H0RvAOQfMQn0zRlyDOO0lxRzuwEYGoChx/Vy+BQZgDapCqPpToDEzDqQ4LFSaAC2qZM+XciF63CfovOk=) 2026-02-04 00:23:35.671897 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPvCV9z3lwlRPPHzKr58mgCmsKem4OnI6AP0+O5MtFi1) 2026-02-04 00:23:35.671908 | orchestrator | 2026-02-04 00:23:35.671919 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-04 00:23:35.671929 | orchestrator | Wednesday 04 February 2026 00:23:35 +0000 (0:00:01.119) 0:00:12.330 **** 2026-02-04 00:23:35.671948 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCtNFmPayETsWKU45kGggFQAvytguzSTy0/pGudHMKgXAJkNXGiFXr9X1/H8c+fT9sVr8bhR1Kut96X3STCxO5o=) 2026-02-04 00:23:47.335624 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCWDF5C8ey4sSihU7ybfN5hXMGnKZLPxF1vp0kETXt6WooqDvA9L+0OXtvgPOkUZd7GFlfv6JJh42e9a89F7b+EN4ZEAGoWC4UdVqeXTRdXjVZsi61H5obCxlmR9FJhQXc2FI+lbZfNxOBaV5TKNkKZmI1O8CP3Q0CYHVuNFcHY6Hx0tWM865yBIiuQu0oir1fjRJXELioUHf6OCK79pfHCaI79CrZ57GybsOTHwsFtdxyVxlz+v3zuvf5l25JdWI1IN0zl9VQvPaF3LQeSOTY+V6brfSXOjSDI/ux5nrzJcPu57gXdTe7CFMFDOkbjjzkyRQXx1Rtsv8lxrroMfcfp0qCa3/xSA/CWgHdiZ4sq4h/eJjzpd31ghSOggL/z1O0RXDLNqfLEuzN84Fxp8w5l4m6lBtohRMyaaPQwyRSW9D8xYF1NwSYALiRGnOmA2uFdXAeoEZLnuU1Ru8n6eHcz9h7sHbFJt7fLkjnpRtB008bz/92ZgpxklvzVeeJkR1E=) 2026-02-04 00:23:47.335738 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILaWy5p2W+hLnqvq6NAOxZyPhl/gjDf0PXxgs3DnhaOT) 2026-02-04 00:23:47.335758 | orchestrator | 2026-02-04 00:23:47.335771 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-04 00:23:47.335784 | orchestrator | Wednesday 04 February 2026 00:23:36 +0000 (0:00:01.108) 0:00:13.438 **** 2026-02-04 00:23:47.335797 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCRqMVAiwuER6Ql6W/E4kH19LCja8Pdpf+s3rPcGHp3WayzbPikt8dRquYKHwzdVy46msAXmX9iJ9JCEHg/UgVzeUNs9nzjIUEA4gFmtGcISOW6sBENsrhRah3mCVXnhc6bqI32p+ppOvXojwgJqFsQY30aqFHZBGYHFhj97vJLQ4JmTHuA43bboUgzECD0LiHkJJE28OjAbHH3Sl56i9qO6NZbTLAKRcT0lPLp3V+gaBI5+5KoUKr/La91kiN+BNMakr8QKct3vWZB5SsCBX0Hf7TOWBwL/kAq8X9X+i9GQoNSUIAAar69GqPGvJVBXZZTlQCAkSxYTG8f012BX6IqqUnlXduvWxHgOIFCxphwdDa7B/pG7cc2WWYSNGcuEZpAUv/y8cwM/xbLazNYlkIIXI2jvwg+GO9hGsSvgZ/e2XlF6IpToR4Zj4nGiG1+C9jcYBvkYbXspgpZEdLzdXlfTtZ3rXSYvrj5AGHE3Xzya8zvw5kZFNts2MrE4nnwcbU=) 2026-02-04 00:23:47.335809 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDD5HPNqqbx2Q7zQ7EozQV+ifQwCxB0BOSYEPLbgLkBE) 2026-02-04 00:23:47.335821 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOeli2wYm4vu7etVrjCmJDhQBvSI7wc/r4WL+q0DOmyjA+XvR6SJlUAO0JaDOr5VvOg/7mteNC7hJkpc7g+eFL8=) 2026-02-04 00:23:47.335834 | orchestrator | 2026-02-04 00:23:47.335845 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2026-02-04 00:23:47.335861 | orchestrator | Wednesday 04 February 2026 00:23:37 +0000 (0:00:01.156) 0:00:14.595 **** 2026-02-04 00:23:47.335881 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-02-04 00:23:47.335900 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-02-04 00:23:47.335919 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-02-04 00:23:47.335939 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-02-04 00:23:47.335959 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-02-04 00:23:47.336001 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-02-04 00:23:47.336044 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-02-04 00:23:47.336056 | orchestrator | 2026-02-04 00:23:47.336067 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2026-02-04 00:23:47.336080 | orchestrator | Wednesday 04 February 2026 00:23:42 +0000 (0:00:05.452) 0:00:20.047 **** 2026-02-04 00:23:47.336093 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-02-04 00:23:47.336105 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-02-04 00:23:47.336116 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for 
testbed-manager => (item=Scanned entries of testbed-node-5) 2026-02-04 00:23:47.336127 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-02-04 00:23:47.336138 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-02-04 00:23:47.336150 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-02-04 00:23:47.336163 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-02-04 00:23:47.336175 | orchestrator | 2026-02-04 00:23:47.336205 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-04 00:23:47.336219 | orchestrator | Wednesday 04 February 2026 00:23:43 +0000 (0:00:00.193) 0:00:20.241 **** 2026-02-04 00:23:47.336233 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICayi6AbzAZRLij2vWDD5AgzW0PPkMM65tI4wv/tk/Vg) 2026-02-04 00:23:47.336250 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDd3EwnsAUt/odzIW0tNcx0EwIi2c6G2mRJYl4+wT0A0kZUoZ9VqPPTdJ5e+kUUYX7eNybUSfC8sqlyobiSj+W5XC3KWO+SE+sZc8yu1KKSGDFIFHnInpJY7T/2VxhiL+Dq2wB8rWwPGs8DWnuCSUfZqNuJwRb7RHuTOy0/wGZ0vBXbRstHGHHrA4giRnDtqlnE61e1oH876L94ABmohrzeEPi8GhSgDjIov6TAcCMg5VCs+KxKbYHIKYJQ7FeEfCObVf+1AtkxRjSq7NMqyHyFwPxYpWbBLL/19p5JsliZBYHKLFqcfiptqldSm8BHbBPxdOEmsljAq3AbTkjqzTIXKUmyHxTn9ePUYPTKJP7/sXfo7lmHsLV1wdgYRp8VLmtQJ8Yl9pCaptVaOYg7mG2NFW8/WuZkFiW45bnbkXsXl3J5x/f3lr22c0P8/2GInD0Qge4PgRsJ0EzIbtTjHJaZrOd12xrliiEDQBdeDLGCUiliVHBwXS2QI04A0bnR+j8=) 2026-02-04 00:23:47.336264 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEquUKFJAb6b9uBJ6eug5Wq5nuN6kET6a8ny/jLNDCyi+CZIyjV5RMNGC0Norp/BjpyW8GuHhUz5lT2BAHf0PHY=) 2026-02-04 00:23:47.336277 | orchestrator | 2026-02-04 00:23:47.336290 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-04 00:23:47.336302 | orchestrator | Wednesday 04 February 2026 00:23:44 +0000 (0:00:01.099) 0:00:21.340 **** 2026-02-04 00:23:47.336315 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOpz//6IpRHELtgyNAg+jWHmoz+5exjXz//PpJlrZ1qe) 2026-02-04 00:23:47.336328 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDAKuM2viW+aSKDRhfPwK9r0AXzIaagTow3w5iqH8ohNGIDQ3syX2qJhCp+1PZVp75wQSRu9Iu2e3XJ1WOT7Kj1XmtgwMkt9/J+WkDSSIhFtEdsnCKp/cNKdVpBvX25P1JNp3XWF65lcviQ4XqKDzL3H2Dzps6eVfs9l9syZfR14fUX4JxbDMOgaxGblwkqqo23b1NyJ7XiRKWsSzjKqyJiyqj7EE8QvYkUl2PA4NJGifIbiTmkjQ+63NmtJdLSmF8vGVMbO3BILGw2ZO12ijONv5sQrdWag0SaQmNLR+MwHcOdU+49GYEtagJE0BY+5szQiGBMpfSPpwlFtl9e9MiEHGdA1iDY9jCkFfX8vndhKRH1QTm5J46Ydm4Z9u8CqtNzKF/vgVBRr/2XWYOob0cekLjDBPFBFadxvq8dqZq+QoGj028dky+h+XcPL0XE664MEdcAqvn+J+JjkY4F5Ynqxkqcr1VR9Y5g1eMH7o5brHH+LqUjq6Q4zLCOIj+tbLU=) 2026-02-04 00:23:47.336349 | orchestrator | changed: [testbed-manager] => 
(item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBO2svct6n7ersVrqVok6Iy8xTNx0p9HG7sOSaCGkp3CQNy7YK7N93jQ1X2xAzHwLVlxHAj5eLD0Iiw+7/Q5MJ+g=) 2026-02-04 00:23:47.336417 | orchestrator | 2026-02-04 00:23:47.336431 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-04 00:23:47.336442 | orchestrator | Wednesday 04 February 2026 00:23:45 +0000 (0:00:01.147) 0:00:22.488 **** 2026-02-04 00:23:47.336454 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDy+3wPdok5Kbvfq16r6KJI/EQj0iKmrIkkvYDl2v0N1U2g1cREAkz6Hu3PgUGAXsZmqws1RJHCM3C4nse52dsCtAngMW6E7s+Uv1JWHuusvNF7AGwtZkqL8gUyJIWk1jRfzSTwklnJd6sP49WNdcVohMar8Fl6te7+GCZdkVjEEg3SGdkjqNC0jfHFZPOiCA6De+h7X+TspR9a9niajP7o5qgfP2cn7W4STttSBTZo4Ne/qy2x+MD4pLuDEz1a47AF9avC2aSqlVHd/rrbfFDAseJD8g/LsW+qhMQk53K/8Lh2+2H+DOQTQVT+sy/v63TIv4x0YNHqDsOxPWKeC9FJe9yjkVRvVPPkG411aOiXuMNRAuXLDuik1DC6Ul8eGzTGtpYelfu+ECXAXXd2DeTZLglBEEUsWYqPsNXIzfrKEhQ/xyR2nlycz74J1L+eDTcX4ah2awmhRX26PhCsPBFD2wD9qsLvHmGyN5jJynXNCkazksubpGt5U44OoSO02WE=) 2026-02-04 00:23:47.336465 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAID6UX19dOpsHeko034x1UkSovWVI4Ul8wlxwISOf/Ocu) 2026-02-04 00:23:47.336477 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCL9nKZ5qwbXyz9oAz0pjy2F/g/KIa4T2bJG+ZRBAGNElxBMgGnwzqb+SsQaGDa2fx/EVOJDWmALKqtWx9YfhxE=) 2026-02-04 00:23:47.336487 | orchestrator | 2026-02-04 00:23:47.336498 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-04 00:23:47.336509 | orchestrator | Wednesday 04 February 2026 00:23:46 +0000 (0:00:01.134) 0:00:23.622 **** 2026-02-04 00:23:47.336520 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 
AAAAC3NzaC1lZDI1NTE5AAAAIIX4cZNTzGIonEfK4F+EeCb2yQuBekZzncERsdss+0u2) 2026-02-04 00:23:47.336553 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC+7wrs2y7Asjrdy8wXQu4l3dIyr6z6n0txArDj+EF9ZDt0MItnVhAmiCLJU9m25/cIQyxzBPjqcFGN6lmJ2EWxgcYQIsURhQCoXbzCLC+5tgw0rc8oyWGjbScO6ZA77dfXfrBQDmES6BEddclCS98z2ML6vW1WeAW1rQDjpX3wLmFFgy2l3iyvaib/a7IrVMPbCikpR/jiZ6RQavIrgZwSvjzUVKzpSw7iNlQLvAea1hbYHOk5gYndTF/Xm7tcLnCRFF5ItNfRTqISCJkU2bZ+14rZ4/ge51f4Bl1aEG52v7Kwh41X18Bd9Z4rqc+Eqhtq6UYGilFRl6IFfNQFypnC/DUwp7CEqfIctPQ7n89EoF3A/Uij6d+0jRzHfaTIxX6Qd8I0mTJC1vNVB9E9/6nBNym5ogZUd6+bCQoytuQ+YxCG0zyhpK883dguXICKsNmCwDglQ2zoPdLlURNGMcKHihO2ws7YFqg4h5/n9OyuHZpkW5Ba2zPZeyXaRtsT7Gc=) 2026-02-04 00:23:52.469756 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAsYWXaCrV+k9B74LMljwqVafRJ5Qk10aqkU8x9ubNEqNNmPSaq9uOYjEx2tqXKr5i4YZbv8cJO8Fl/ttLjFWJM=) 2026-02-04 00:23:52.469843 | orchestrator | 2026-02-04 00:23:52.469859 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-04 00:23:52.469873 | orchestrator | Wednesday 04 February 2026 00:23:47 +0000 (0:00:01.183) 0:00:24.806 **** 2026-02-04 00:23:52.469885 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEixqV5H0RvAOQfMQn0zRlyDOO0lxRzuwEYGoChx/Vy+BQZgDapCqPpToDEzDqQ4LFSaAC2qZM+XciF63CfovOk=) 2026-02-04 00:23:52.469898 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC7zoBNi62miKI7pEIxqsqj4lc3vyDfh+DENEV2y9DMQU6tlWX9/EBLzIE9mvgAMTSp1fNtah3dBR9vpYk5Iu///wxQ5WD6Mfo5HPaxaungtysK40h4uw/7JgNm3pdpsaTsPz0DVBhpJ8m5vev7bzlZ2ztUHwIuDx225UvH2EfwiDYsl6udQN3VlLV4rYIQUYjFHnsjr9uB9aw/jm4ktD5bLFst/qmTlsLP9o/UnuOIExoqMl7iteVNfSv3rCwZIrEupTNqY11FnsvKHg0GsB/gGEZtAMonZpztClow6sucirWBtMAt6bf+U4rmooizuhHyVA5WDTwFZAxNQCo3SbevpgHOPADGk4OZJyCuiptEgtCGSTw6Q7X9GILfqk/X6wi1cRD5lr/6JmLFfPLGuZDyKx9EWch1e+0FyXVeyOxXPFuWx+x4qWGIvgnRI1nIlFpecgDI30mHLp5UKThyo7BqFjyNCc33STNi5z03himLY8/maKenKH0iKSRecggz1vs=) 2026-02-04 00:23:52.469939 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPvCV9z3lwlRPPHzKr58mgCmsKem4OnI6AP0+O5MtFi1) 2026-02-04 00:23:52.469953 | orchestrator | 2026-02-04 00:23:52.469988 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-04 00:23:52.469997 | orchestrator | Wednesday 04 February 2026 00:23:48 +0000 (0:00:01.166) 0:00:25.972 **** 2026-02-04 00:23:52.470004 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILaWy5p2W+hLnqvq6NAOxZyPhl/gjDf0PXxgs3DnhaOT) 2026-02-04 00:23:52.470011 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCWDF5C8ey4sSihU7ybfN5hXMGnKZLPxF1vp0kETXt6WooqDvA9L+0OXtvgPOkUZd7GFlfv6JJh42e9a89F7b+EN4ZEAGoWC4UdVqeXTRdXjVZsi61H5obCxlmR9FJhQXc2FI+lbZfNxOBaV5TKNkKZmI1O8CP3Q0CYHVuNFcHY6Hx0tWM865yBIiuQu0oir1fjRJXELioUHf6OCK79pfHCaI79CrZ57GybsOTHwsFtdxyVxlz+v3zuvf5l25JdWI1IN0zl9VQvPaF3LQeSOTY+V6brfSXOjSDI/ux5nrzJcPu57gXdTe7CFMFDOkbjjzkyRQXx1Rtsv8lxrroMfcfp0qCa3/xSA/CWgHdiZ4sq4h/eJjzpd31ghSOggL/z1O0RXDLNqfLEuzN84Fxp8w5l4m6lBtohRMyaaPQwyRSW9D8xYF1NwSYALiRGnOmA2uFdXAeoEZLnuU1Ru8n6eHcz9h7sHbFJt7fLkjnpRtB008bz/92ZgpxklvzVeeJkR1E=) 2026-02-04 00:23:52.470058 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCtNFmPayETsWKU45kGggFQAvytguzSTy0/pGudHMKgXAJkNXGiFXr9X1/H8c+fT9sVr8bhR1Kut96X3STCxO5o=) 2026-02-04 00:23:52.470065 | orchestrator | 2026-02-04 00:23:52.470072 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-04 00:23:52.470079 | orchestrator | Wednesday 04 February 2026 00:23:50 +0000 (0:00:01.113) 0:00:27.086 **** 2026-02-04 00:23:52.470086 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCRqMVAiwuER6Ql6W/E4kH19LCja8Pdpf+s3rPcGHp3WayzbPikt8dRquYKHwzdVy46msAXmX9iJ9JCEHg/UgVzeUNs9nzjIUEA4gFmtGcISOW6sBENsrhRah3mCVXnhc6bqI32p+ppOvXojwgJqFsQY30aqFHZBGYHFhj97vJLQ4JmTHuA43bboUgzECD0LiHkJJE28OjAbHH3Sl56i9qO6NZbTLAKRcT0lPLp3V+gaBI5+5KoUKr/La91kiN+BNMakr8QKct3vWZB5SsCBX0Hf7TOWBwL/kAq8X9X+i9GQoNSUIAAar69GqPGvJVBXZZTlQCAkSxYTG8f012BX6IqqUnlXduvWxHgOIFCxphwdDa7B/pG7cc2WWYSNGcuEZpAUv/y8cwM/xbLazNYlkIIXI2jvwg+GO9hGsSvgZ/e2XlF6IpToR4Zj4nGiG1+C9jcYBvkYbXspgpZEdLzdXlfTtZ3rXSYvrj5AGHE3Xzya8zvw5kZFNts2MrE4nnwcbU=) 2026-02-04 00:23:52.470093 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOeli2wYm4vu7etVrjCmJDhQBvSI7wc/r4WL+q0DOmyjA+XvR6SJlUAO0JaDOr5VvOg/7mteNC7hJkpc7g+eFL8=) 2026-02-04 00:23:52.470099 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDD5HPNqqbx2Q7zQ7EozQV+ifQwCxB0BOSYEPLbgLkBE) 2026-02-04 00:23:52.470106 | orchestrator | 2026-02-04 00:23:52.470113 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2026-02-04 00:23:52.470119 | orchestrator | Wednesday 04 February 2026 00:23:51 +0000 (0:00:01.116) 0:00:28.202 **** 2026-02-04 00:23:52.470127 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2026-02-04 00:23:52.470134 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  
2026-02-04 00:23:52.470141 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2026-02-04 00:23:52.470147 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-02-04 00:23:52.470168 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-02-04 00:23:52.470175 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-02-04 00:23:52.470181 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-02-04 00:23:52.470188 | orchestrator | skipping: [testbed-manager] 2026-02-04 00:23:52.470195 | orchestrator | 2026-02-04 00:23:52.470201 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2026-02-04 00:23:52.470208 | orchestrator | Wednesday 04 February 2026 00:23:51 +0000 (0:00:00.180) 0:00:28.383 **** 2026-02-04 00:23:52.470223 | orchestrator | skipping: [testbed-manager] 2026-02-04 00:23:52.470230 | orchestrator | 2026-02-04 00:23:52.470236 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2026-02-04 00:23:52.470243 | orchestrator | Wednesday 04 February 2026 00:23:51 +0000 (0:00:00.064) 0:00:28.447 **** 2026-02-04 00:23:52.470250 | orchestrator | skipping: [testbed-manager] 2026-02-04 00:23:52.470256 | orchestrator | 2026-02-04 00:23:52.470263 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2026-02-04 00:23:52.470269 | orchestrator | Wednesday 04 February 2026 00:23:51 +0000 (0:00:00.050) 0:00:28.498 **** 2026-02-04 00:23:52.470276 | orchestrator | changed: [testbed-manager] 2026-02-04 00:23:52.470282 | orchestrator | 2026-02-04 00:23:52.470289 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 00:23:52.470298 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-04 00:23:52.470308 | orchestrator | 2026-02-04 
00:23:52.470316 | orchestrator | 2026-02-04 00:23:52.470326 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-04 00:23:52.470338 | orchestrator | Wednesday 04 February 2026 00:23:52 +0000 (0:00:00.786) 0:00:29.285 **** 2026-02-04 00:23:52.470353 | orchestrator | =============================================================================== 2026-02-04 00:23:52.470389 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.31s 2026-02-04 00:23:52.470400 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.45s 2026-02-04 00:23:52.470412 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.20s 2026-02-04 00:23:52.470423 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.18s 2026-02-04 00:23:52.470433 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.17s 2026-02-04 00:23:52.470442 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.16s 2026-02-04 00:23:52.470453 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.15s 2026-02-04 00:23:52.470464 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.15s 2026-02-04 00:23:52.470475 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.13s 2026-02-04 00:23:52.470486 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.12s 2026-02-04 00:23:52.470496 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.12s 2026-02-04 00:23:52.470505 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.11s 2026-02-04 00:23:52.470520 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts 
entries ----------- 1.11s 2026-02-04 00:23:52.470546 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.11s 2026-02-04 00:23:52.470557 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s 2026-02-04 00:23:52.470567 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2026-02-04 00:23:52.470577 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.79s 2026-02-04 00:23:52.470587 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.19s 2026-02-04 00:23:52.470599 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.18s 2026-02-04 00:23:52.470610 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.17s 2026-02-04 00:23:52.820250 | orchestrator | + osism apply squid 2026-02-04 00:24:04.954184 | orchestrator | 2026-02-04 00:24:04 | INFO  | Prepare task for execution of squid. 2026-02-04 00:24:05.041285 | orchestrator | 2026-02-04 00:24:05 | INFO  | Task 77bc1ac9-8d6c-4f2a-bc58-c2c5ff10555c (squid) was prepared for execution. 2026-02-04 00:24:05.041426 | orchestrator | 2026-02-04 00:24:05 | INFO  | It takes a moment until task 77bc1ac9-8d6c-4f2a-bc58-c2c5ff10555c (squid) has been started and output is visible here. 
2026-02-04 00:26:00.988057 | orchestrator | 2026-02-04 00:26:00.988192 | orchestrator | PLAY [Apply role squid] ******************************************************** 2026-02-04 00:26:00.988214 | orchestrator | 2026-02-04 00:26:00.988227 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2026-02-04 00:26:00.988239 | orchestrator | Wednesday 04 February 2026 00:24:09 +0000 (0:00:00.163) 0:00:00.163 **** 2026-02-04 00:26:00.988251 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2026-02-04 00:26:00.988263 | orchestrator | 2026-02-04 00:26:00.988313 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2026-02-04 00:26:00.988324 | orchestrator | Wednesday 04 February 2026 00:24:09 +0000 (0:00:00.101) 0:00:00.265 **** 2026-02-04 00:26:00.988335 | orchestrator | ok: [testbed-manager] 2026-02-04 00:26:00.988348 | orchestrator | 2026-02-04 00:26:00.988359 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2026-02-04 00:26:00.988370 | orchestrator | Wednesday 04 February 2026 00:24:11 +0000 (0:00:01.578) 0:00:01.843 **** 2026-02-04 00:26:00.988381 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2026-02-04 00:26:00.988392 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2026-02-04 00:26:00.988403 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2026-02-04 00:26:00.988414 | orchestrator | 2026-02-04 00:26:00.988425 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2026-02-04 00:26:00.988436 | orchestrator | Wednesday 04 February 2026 00:24:12 +0000 (0:00:01.208) 0:00:03.052 **** 2026-02-04 00:26:00.988447 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2026-02-04 00:26:00.988458 | 
orchestrator | 2026-02-04 00:26:00.988469 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2026-02-04 00:26:00.988480 | orchestrator | Wednesday 04 February 2026 00:24:13 +0000 (0:00:01.123) 0:00:04.175 **** 2026-02-04 00:26:00.988490 | orchestrator | ok: [testbed-manager] 2026-02-04 00:26:00.988501 | orchestrator | 2026-02-04 00:26:00.988512 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2026-02-04 00:26:00.988523 | orchestrator | Wednesday 04 February 2026 00:24:13 +0000 (0:00:00.383) 0:00:04.559 **** 2026-02-04 00:26:00.988534 | orchestrator | changed: [testbed-manager] 2026-02-04 00:26:00.988545 | orchestrator | 2026-02-04 00:26:00.988555 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2026-02-04 00:26:00.988566 | orchestrator | Wednesday 04 February 2026 00:24:14 +0000 (0:00:00.955) 0:00:05.515 **** 2026-02-04 00:26:00.988580 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2026-02-04 00:26:00.988594 | orchestrator | ok: [testbed-manager] 2026-02-04 00:26:00.988606 | orchestrator | 2026-02-04 00:26:00.988618 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2026-02-04 00:26:00.988631 | orchestrator | Wednesday 04 February 2026 00:24:47 +0000 (0:00:33.235) 0:00:38.750 **** 2026-02-04 00:26:00.988644 | orchestrator | changed: [testbed-manager] 2026-02-04 00:26:00.988657 | orchestrator | 2026-02-04 00:26:00.988689 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2026-02-04 00:26:00.988702 | orchestrator | Wednesday 04 February 2026 00:24:59 +0000 (0:00:11.963) 0:00:50.714 **** 2026-02-04 00:26:00.988716 | orchestrator | Pausing for 60 seconds 2026-02-04 00:26:00.988730 | orchestrator | changed: [testbed-manager] 2026-02-04 00:26:00.988743 | orchestrator | 2026-02-04 00:26:00.988755 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2026-02-04 00:26:00.988767 | orchestrator | Wednesday 04 February 2026 00:25:59 +0000 (0:01:00.087) 0:01:50.802 **** 2026-02-04 00:26:00.988780 | orchestrator | ok: [testbed-manager] 2026-02-04 00:26:00.988793 | orchestrator | 2026-02-04 00:26:00.988806 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2026-02-04 00:26:00.988844 | orchestrator | Wednesday 04 February 2026 00:26:00 +0000 (0:00:00.065) 0:01:50.867 **** 2026-02-04 00:26:00.988857 | orchestrator | changed: [testbed-manager] 2026-02-04 00:26:00.988870 | orchestrator | 2026-02-04 00:26:00.988883 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 00:26:00.988896 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-04 00:26:00.988909 | orchestrator | 2026-02-04 00:26:00.988920 | orchestrator | 2026-02-04 00:26:00.988931 | orchestrator | 
TASKS RECAP ******************************************************************** 2026-02-04 00:26:00.988942 | orchestrator | Wednesday 04 February 2026 00:26:00 +0000 (0:00:00.633) 0:01:51.501 **** 2026-02-04 00:26:00.988953 | orchestrator | =============================================================================== 2026-02-04 00:26:00.988964 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.09s 2026-02-04 00:26:00.988974 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 33.24s 2026-02-04 00:26:00.988985 | orchestrator | osism.services.squid : Restart squid service --------------------------- 11.96s 2026-02-04 00:26:00.988996 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.58s 2026-02-04 00:26:00.989007 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.21s 2026-02-04 00:26:00.989017 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.12s 2026-02-04 00:26:00.989028 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.96s 2026-02-04 00:26:00.989039 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.63s 2026-02-04 00:26:00.989050 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.38s 2026-02-04 00:26:00.989060 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.10s 2026-02-04 00:26:00.989071 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.07s 2026-02-04 00:26:01.344754 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-02-04 00:26:01.344854 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla 2026-02-04 00:26:01.348499 | orchestrator | + set -e 2026-02-04 00:26:01.348669 | orchestrator | + NAMESPACE=kolla 2026-02-04 
00:26:01.348691 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla#g' /opt/configuration/inventory/group_vars/all/kolla.yml 2026-02-04 00:26:01.355321 | orchestrator | ++ semver latest 9.0.0 2026-02-04 00:26:01.403408 | orchestrator | + [[ -1 -lt 0 ]] 2026-02-04 00:26:01.403500 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-02-04 00:26:01.403910 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2026-02-04 00:26:13.487800 | orchestrator | 2026-02-04 00:26:13 | INFO  | Prepare task for execution of operator. 2026-02-04 00:26:13.569225 | orchestrator | 2026-02-04 00:26:13 | INFO  | Task a50e9b30-b333-44f6-90ce-5a30de7e2d8b (operator) was prepared for execution. 2026-02-04 00:26:13.569426 | orchestrator | 2026-02-04 00:26:13 | INFO  | It takes a moment until task a50e9b30-b333-44f6-90ce-5a30de7e2d8b (operator) has been started and output is visible here. 2026-02-04 00:26:30.446243 | orchestrator | 2026-02-04 00:26:30.446427 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2026-02-04 00:26:30.446456 | orchestrator | 2026-02-04 00:26:30.446477 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-04 00:26:30.446496 | orchestrator | Wednesday 04 February 2026 00:26:17 +0000 (0:00:00.162) 0:00:00.162 **** 2026-02-04 00:26:30.446517 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:26:30.446530 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:26:30.446540 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:26:30.446549 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:26:30.446559 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:26:30.446572 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:26:30.446582 | orchestrator | 2026-02-04 00:26:30.446593 | orchestrator | TASK [Do not require tty for all users] **************************************** 2026-02-04 00:26:30.446628 | orchestrator | Wednesday 04 February 2026 
00:26:21 +0000 (0:00:03.419) 0:00:03.581 **** 2026-02-04 00:26:30.446639 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:26:30.446648 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:26:30.446658 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:26:30.446668 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:26:30.446678 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:26:30.446687 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:26:30.446709 | orchestrator | 2026-02-04 00:26:30.446719 | orchestrator | PLAY [Apply role operator] ***************************************************** 2026-02-04 00:26:30.446729 | orchestrator | 2026-02-04 00:26:30.446738 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-02-04 00:26:30.446748 | orchestrator | Wednesday 04 February 2026 00:26:22 +0000 (0:00:00.931) 0:00:04.513 **** 2026-02-04 00:26:30.446758 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:26:30.446768 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:26:30.446777 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:26:30.446789 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:26:30.446800 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:26:30.446811 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:26:30.446824 | orchestrator | 2026-02-04 00:26:30.446836 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-02-04 00:26:30.446849 | orchestrator | Wednesday 04 February 2026 00:26:22 +0000 (0:00:00.196) 0:00:04.710 **** 2026-02-04 00:26:30.446862 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:26:30.446874 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:26:30.446886 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:26:30.446899 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:26:30.446937 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:26:30.446950 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:26:30.446963 | orchestrator 
| 2026-02-04 00:26:30.446975 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-02-04 00:26:30.446987 | orchestrator | Wednesday 04 February 2026 00:26:22 +0000 (0:00:00.173) 0:00:04.884 **** 2026-02-04 00:26:30.447000 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:26:30.447015 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:26:30.447027 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:26:30.447039 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:26:30.447052 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:26:30.447065 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:26:30.447077 | orchestrator | 2026-02-04 00:26:30.447090 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-02-04 00:26:30.447104 | orchestrator | Wednesday 04 February 2026 00:26:23 +0000 (0:00:00.638) 0:00:05.523 **** 2026-02-04 00:26:30.447116 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:26:30.447129 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:26:30.447141 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:26:30.447152 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:26:30.447163 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:26:30.447173 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:26:30.447184 | orchestrator | 2026-02-04 00:26:30.447195 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-02-04 00:26:30.447206 | orchestrator | Wednesday 04 February 2026 00:26:24 +0000 (0:00:00.840) 0:00:06.363 **** 2026-02-04 00:26:30.447217 | orchestrator | changed: [testbed-node-0] => (item=adm) 2026-02-04 00:26:30.447228 | orchestrator | changed: [testbed-node-2] => (item=adm) 2026-02-04 00:26:30.447239 | orchestrator | changed: [testbed-node-1] => (item=adm) 2026-02-04 00:26:30.447285 | orchestrator | changed: [testbed-node-3] => (item=adm) 
2026-02-04 00:26:30.447304 | orchestrator | changed: [testbed-node-4] => (item=adm) 2026-02-04 00:26:30.447323 | orchestrator | changed: [testbed-node-5] => (item=adm) 2026-02-04 00:26:30.447342 | orchestrator | changed: [testbed-node-0] => (item=sudo) 2026-02-04 00:26:30.447361 | orchestrator | changed: [testbed-node-2] => (item=sudo) 2026-02-04 00:26:30.447395 | orchestrator | changed: [testbed-node-3] => (item=sudo) 2026-02-04 00:26:30.447414 | orchestrator | changed: [testbed-node-4] => (item=sudo) 2026-02-04 00:26:30.447435 | orchestrator | changed: [testbed-node-1] => (item=sudo) 2026-02-04 00:26:30.447455 | orchestrator | changed: [testbed-node-5] => (item=sudo) 2026-02-04 00:26:30.447472 | orchestrator | 2026-02-04 00:26:30.447483 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-02-04 00:26:30.447495 | orchestrator | Wednesday 04 February 2026 00:26:25 +0000 (0:00:01.291) 0:00:07.655 **** 2026-02-04 00:26:30.447505 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:26:30.447516 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:26:30.447527 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:26:30.447537 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:26:30.447548 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:26:30.447558 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:26:30.447569 | orchestrator | 2026-02-04 00:26:30.447579 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-02-04 00:26:30.447591 | orchestrator | Wednesday 04 February 2026 00:26:26 +0000 (0:00:01.326) 0:00:08.982 **** 2026-02-04 00:26:30.447602 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8) 2026-02-04 00:26:30.447613 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8) 2026-02-04 00:26:30.447624 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8) 
2026-02-04 00:26:30.447634 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8)
2026-02-04 00:26:30.447646 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8)
2026-02-04 00:26:30.447679 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8)
2026-02-04 00:26:30.447691 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8)
2026-02-04 00:26:30.447701 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8)
2026-02-04 00:26:30.447712 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8)
2026-02-04 00:26:30.447723 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8)
2026-02-04 00:26:30.447734 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8)
2026-02-04 00:26:30.447745 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8)
2026-02-04 00:26:30.447755 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8)
2026-02-04 00:26:30.447766 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created
2026-02-04 00:26:30.447777 | orchestrator | with a mode of 0700, this may cause issues when running as another user. To
2026-02-04 00:26:30.447788 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually
2026-02-04 00:26:30.447799 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8)
2026-02-04 00:26:30.447809 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8)
2026-02-04 00:26:30.447820 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8)
2026-02-04 00:26:30.447831 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8)
2026-02-04 00:26:30.447841 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8)
2026-02-04 00:26:30.447852 | orchestrator |
2026-02-04 00:26:30.447863 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] ***
2026-02-04 00:26:30.447876 | orchestrator | Wednesday 04 February 2026 00:26:28 +0000 (0:00:01.284) 0:00:10.266 ****
2026-02-04 00:26:30.447895 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:26:30.447915 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:26:30.447934 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:26:30.447961 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:26:30.447980 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:26:30.447998 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:26:30.448016 | orchestrator |
2026-02-04 00:26:30.448034 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] ***
2026-02-04 00:26:30.448066 | orchestrator | Wednesday 04 February 2026 00:26:28 +0000 (0:00:00.175) 0:00:10.441 ****
2026-02-04 00:26:30.448085 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:26:30.448104 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:26:30.448118 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:26:30.448128 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:26:30.448139 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:26:30.448150 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:26:30.448161 | orchestrator |
2026-02-04 00:26:30.448172 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2026-02-04 00:26:30.448183 | orchestrator | Wednesday 04 February 2026 00:26:28 +0000 (0:00:00.213) 0:00:10.655 ****
2026-02-04 00:26:30.448194 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:26:30.448204 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:26:30.448215 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:26:30.448226 | orchestrator | changed: [testbed-node-4]
2026-02-04 00:26:30.448236 | orchestrator | changed: [testbed-node-3]
2026-02-04 00:26:30.448274 | orchestrator | changed: [testbed-node-5]
2026-02-04 00:26:30.448287 | orchestrator |
2026-02-04 00:26:30.448298 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2026-02-04 00:26:30.448309 | orchestrator | Wednesday 04 February 2026 00:26:29 +0000 (0:00:00.660) 0:00:11.316 ****
2026-02-04 00:26:30.448320 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:26:30.448331 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:26:30.448342 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:26:30.448353 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:26:30.448364 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:26:30.448374 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:26:30.448385 | orchestrator |
2026-02-04 00:26:30.448396 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2026-02-04 00:26:30.448407 | orchestrator | Wednesday 04 February 2026 00:26:29 +0000 (0:00:00.183) 0:00:11.499 ****
2026-02-04 00:26:30.448418 | orchestrator | changed: [testbed-node-2] => (item=None)
2026-02-04 00:26:30.448429 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:26:30.448439 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-02-04 00:26:30.448450 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-02-04 00:26:30.448461 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:26:30.448472 | orchestrator | changed: [testbed-node-4]
2026-02-04 00:26:30.448483 | orchestrator | changed: [testbed-node-1] => (item=None)
2026-02-04 00:26:30.448493 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:26:30.448504 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-02-04 00:26:30.448515 | orchestrator | changed: [testbed-node-5]
2026-02-04 00:26:30.448526 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-02-04 00:26:30.448537 | orchestrator | changed: [testbed-node-3]
2026-02-04 00:26:30.448547 | orchestrator |
2026-02-04 00:26:30.448558 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2026-02-04 00:26:30.448569 | orchestrator | Wednesday 04 February 2026 00:26:30 +0000 (0:00:00.829) 0:00:12.329 ****
2026-02-04 00:26:30.448580 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:26:30.448591 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:26:30.448601 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:26:30.448612 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:26:30.448623 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:26:30.448633 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:26:30.448644 | orchestrator |
2026-02-04 00:26:30.448655 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2026-02-04 00:26:30.448666 | orchestrator | Wednesday 04 February 2026 00:26:30 +0000 (0:00:00.194) 0:00:12.524 ****
2026-02-04 00:26:30.448677 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:26:30.448688 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:26:30.448699 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:26:30.448750 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:26:30.448774 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:26:31.896736 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:26:31.896820 | orchestrator |
2026-02-04 00:26:31.896831 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2026-02-04 00:26:31.896841 | orchestrator | Wednesday 04 February 2026 00:26:30 +0000 (0:00:00.201) 0:00:12.725 ****
2026-02-04 00:26:31.896848 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:26:31.896855 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:26:31.896861 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:26:31.896868 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:26:31.896875 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:26:31.896882 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:26:31.896888 | orchestrator |
2026-02-04 00:26:31.896895 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2026-02-04 00:26:31.896902 | orchestrator | Wednesday 04 February 2026 00:26:30 +0000 (0:00:00.189) 0:00:12.915 ****
2026-02-04 00:26:31.896908 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:26:31.896915 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:26:31.896922 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:26:31.896928 | orchestrator | changed: [testbed-node-3]
2026-02-04 00:26:31.896935 | orchestrator | changed: [testbed-node-4]
2026-02-04 00:26:31.896941 | orchestrator | changed: [testbed-node-5]
2026-02-04 00:26:31.896948 | orchestrator |
2026-02-04 00:26:31.896955 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2026-02-04 00:26:31.896961 | orchestrator | Wednesday 04 February 2026 00:26:31 +0000 (0:00:00.704) 0:00:13.619 ****
2026-02-04 00:26:31.896968 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:26:31.896974 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:26:31.896981 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:26:31.896987 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:26:31.896994 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:26:31.897000 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:26:31.897007 | orchestrator |
2026-02-04 00:26:31.897013 | orchestrator | PLAY RECAP *********************************************************************
2026-02-04 00:26:31.897022 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-02-04 00:26:31.897048 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-02-04 00:26:31.897055 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-02-04 00:26:31.897062 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-02-04 00:26:31.897069 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-02-04 00:26:31.897075 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-02-04 00:26:31.897082 | orchestrator |
2026-02-04 00:26:31.897088 | orchestrator |
2026-02-04 00:26:31.897095 | orchestrator | TASKS RECAP ********************************************************************
2026-02-04 00:26:31.897102 | orchestrator | Wednesday 04 February 2026 00:26:31 +0000 (0:00:00.250) 0:00:13.870 ****
2026-02-04 00:26:31.897109 | orchestrator | ===============================================================================
2026-02-04 00:26:31.897115 | orchestrator | Gathering Facts --------------------------------------------------------- 3.42s
2026-02-04 00:26:31.897122 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.33s
2026-02-04 00:26:31.897129 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.29s
2026-02-04 00:26:31.897154 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.28s
2026-02-04 00:26:31.897161 | orchestrator | Do not require tty for all users ---------------------------------------- 0.93s
2026-02-04 00:26:31.897168 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.84s
2026-02-04 00:26:31.897174 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.83s
2026-02-04 00:26:31.897181 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.70s
2026-02-04 00:26:31.897188 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.66s
2026-02-04 00:26:31.897194 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.64s
2026-02-04 00:26:31.897202 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.25s
2026-02-04 00:26:31.897208 | orchestrator | osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file --- 0.21s
2026-02-04 00:26:31.897215 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.20s
2026-02-04 00:26:31.897222 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.20s
2026-02-04 00:26:31.897228 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.19s
2026-02-04 00:26:31.897235 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.19s
2026-02-04 00:26:31.897242 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.18s
2026-02-04 00:26:31.897292 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.18s
2026-02-04 00:26:31.897301 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.17s
2026-02-04 00:26:32.261572 | orchestrator | + osism apply --environment custom facts
2026-02-04 00:26:34.436470 | orchestrator | 2026-02-04 00:26:34 | INFO  | Trying to run play facts in environment custom
2026-02-04 00:26:44.462963 | orchestrator | 2026-02-04 00:26:44 | INFO  | Prepare task for execution of facts.
2026-02-04 00:26:44.535762 | orchestrator | 2026-02-04 00:26:44 | INFO  | Task d733a1e6-dc84-4c34-a398-2561d71f868c (facts) was prepared for execution.
2026-02-04 00:26:44.535852 | orchestrator | 2026-02-04 00:26:44 | INFO  | It takes a moment until task d733a1e6-dc84-4c34-a398-2561d71f868c (facts) has been started and output is visible here.
2026-02-04 00:27:31.970214 | orchestrator |
2026-02-04 00:27:31.970380 | orchestrator | PLAY [Copy custom network devices fact] ****************************************
2026-02-04 00:27:31.970400 | orchestrator |
2026-02-04 00:27:31.970412 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-02-04 00:27:31.970424 | orchestrator | Wednesday 04 February 2026 00:26:49 +0000 (0:00:00.084) 0:00:00.084 ****
2026-02-04 00:27:31.970436 | orchestrator | ok: [testbed-manager]
2026-02-04 00:27:31.970448 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:27:31.970460 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:27:31.970471 | orchestrator | changed: [testbed-node-3]
2026-02-04 00:27:31.970482 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:27:31.970493 | orchestrator | changed: [testbed-node-5]
2026-02-04 00:27:31.970505 | orchestrator | changed: [testbed-node-4]
2026-02-04 00:27:31.970540 | orchestrator |
2026-02-04 00:27:31.970560 | orchestrator | TASK [Copy fact file] **********************************************************
2026-02-04 00:27:31.970578 | orchestrator | Wednesday 04 February 2026 00:26:50 +0000 (0:00:01.402) 0:00:01.487 ****
2026-02-04 00:27:31.970595 | orchestrator | ok: [testbed-manager]
2026-02-04 00:27:31.970611 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:27:31.970629 | orchestrator | changed: [testbed-node-3]
2026-02-04 00:27:31.970648 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:27:31.970667 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:27:31.970708 | orchestrator | changed: [testbed-node-4]
2026-02-04 00:27:31.970730 | orchestrator | changed: [testbed-node-5]
2026-02-04 00:27:31.970780 | orchestrator |
2026-02-04 00:27:31.970794 | orchestrator | PLAY [Copy custom ceph devices facts] ******************************************
2026-02-04 00:27:31.970807 | orchestrator |
2026-02-04 00:27:31.970821 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2026-02-04 00:27:31.970834 | orchestrator | Wednesday 04 February 2026 00:26:51 +0000 (0:00:01.331) 0:00:02.818 ****
2026-02-04 00:27:31.970847 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:27:31.970859 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:27:31.970872 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:27:31.970885 | orchestrator |
2026-02-04 00:27:31.970898 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2026-02-04 00:27:31.970911 | orchestrator | Wednesday 04 February 2026 00:26:51 +0000 (0:00:00.109) 0:00:02.927 ****
2026-02-04 00:27:31.970924 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:27:31.970937 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:27:31.970949 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:27:31.970960 | orchestrator |
2026-02-04 00:27:31.970973 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2026-02-04 00:27:31.970986 | orchestrator | Wednesday 04 February 2026 00:26:52 +0000 (0:00:00.225) 0:00:03.153 ****
2026-02-04 00:27:31.970998 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:27:31.971011 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:27:31.971022 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:27:31.971033 | orchestrator |
2026-02-04 00:27:31.971044 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2026-02-04 00:27:31.971055 | orchestrator | Wednesday 04 February 2026 00:26:52 +0000 (0:00:00.267) 0:00:03.421 ****
2026-02-04 00:27:31.971067 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-04 00:27:31.971079 | orchestrator |
2026-02-04 00:27:31.971090 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2026-02-04 00:27:31.971101 | orchestrator | Wednesday 04 February 2026 00:26:52 +0000 (0:00:00.140) 0:00:03.561 ****
2026-02-04 00:27:31.971112 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:27:31.971123 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:27:31.971134 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:27:31.971144 | orchestrator |
2026-02-04 00:27:31.971155 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2026-02-04 00:27:31.971166 | orchestrator | Wednesday 04 February 2026 00:26:53 +0000 (0:00:00.454) 0:00:04.016 ****
2026-02-04 00:27:31.971177 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:27:31.971188 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:27:31.971199 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:27:31.971209 | orchestrator |
2026-02-04 00:27:31.971220 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2026-02-04 00:27:31.971302 | orchestrator | Wednesday 04 February 2026 00:26:53 +0000 (0:00:00.162) 0:00:04.179 ****
2026-02-04 00:27:31.971314 | orchestrator | changed: [testbed-node-3]
2026-02-04 00:27:31.971325 | orchestrator | changed: [testbed-node-5]
2026-02-04 00:27:31.971336 | orchestrator | changed: [testbed-node-4]
2026-02-04 00:27:31.971347 | orchestrator |
2026-02-04 00:27:31.971358 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2026-02-04 00:27:31.971369 | orchestrator | Wednesday 04 February 2026 00:26:54 +0000 (0:00:01.147) 0:00:05.326 ****
2026-02-04 00:27:31.971380 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:27:31.971391 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:27:31.971402 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:27:31.971413 | orchestrator |
2026-02-04 00:27:31.971424 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-02-04 00:27:31.971435 | orchestrator | Wednesday 04 February 2026 00:26:54 +0000 (0:00:00.515) 0:00:05.841 ****
2026-02-04 00:27:31.971446 | orchestrator | changed: [testbed-node-3]
2026-02-04 00:27:31.971457 | orchestrator | changed: [testbed-node-4]
2026-02-04 00:27:31.971468 | orchestrator | changed: [testbed-node-5]
2026-02-04 00:27:31.971489 | orchestrator |
2026-02-04 00:27:31.971500 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-02-04 00:27:31.971511 | orchestrator | Wednesday 04 February 2026 00:26:56 +0000 (0:00:01.213) 0:00:07.055 ****
2026-02-04 00:27:31.971522 | orchestrator | changed: [testbed-node-3]
2026-02-04 00:27:31.971533 | orchestrator | changed: [testbed-node-4]
2026-02-04 00:27:31.971543 | orchestrator | changed: [testbed-node-5]
2026-02-04 00:27:31.971554 | orchestrator |
2026-02-04 00:27:31.971565 | orchestrator | TASK [Install required packages (RedHat)] **************************************
2026-02-04 00:27:31.971576 | orchestrator | Wednesday 04 February 2026 00:27:13 +0000 (0:00:17.396) 0:00:24.451 ****
2026-02-04 00:27:31.971587 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:27:31.971598 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:27:31.971609 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:27:31.971620 | orchestrator |
2026-02-04 00:27:31.971631 | orchestrator | TASK [Install required packages (Debian)] **************************************
2026-02-04 00:27:31.971663 | orchestrator | Wednesday 04 February 2026 00:27:13 +0000 (0:00:00.090) 0:00:24.542 ****
2026-02-04 00:27:31.971674 | orchestrator | changed: [testbed-node-3]
2026-02-04 00:27:31.971685 | orchestrator | changed: [testbed-node-5]
2026-02-04 00:27:31.971696 | orchestrator | changed: [testbed-node-4]
2026-02-04 00:27:31.971707 | orchestrator |
2026-02-04 00:27:31.971718 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-02-04 00:27:31.971729 | orchestrator | Wednesday 04 February 2026 00:27:21 +0000 (0:00:08.378) 0:00:32.920 ****
2026-02-04 00:27:31.971740 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:27:31.971751 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:27:31.971762 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:27:31.971773 | orchestrator |
2026-02-04 00:27:31.971784 | orchestrator | TASK [Copy fact files] *********************************************************
2026-02-04 00:27:31.971795 | orchestrator | Wednesday 04 February 2026 00:27:22 +0000 (0:00:00.523) 0:00:33.443 ****
2026-02-04 00:27:31.971806 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2026-02-04 00:27:31.971817 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2026-02-04 00:27:31.971828 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2026-02-04 00:27:31.971840 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2026-02-04 00:27:31.971851 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2026-02-04 00:27:31.971862 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2026-02-04 00:27:31.971873 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2026-02-04 00:27:31.971884 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2026-02-04 00:27:31.971894 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2026-02-04 00:27:31.971905 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2026-02-04 00:27:31.971916 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2026-02-04 00:27:31.971927 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2026-02-04 00:27:31.971938 | orchestrator |
2026-02-04 00:27:31.971948 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-02-04 00:27:31.971959 | orchestrator | Wednesday 04 February 2026 00:27:26 +0000 (0:00:03.738) 0:00:37.182 ****
2026-02-04 00:27:31.971970 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:27:31.971981 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:27:31.971992 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:27:31.972003 | orchestrator |
2026-02-04 00:27:31.972014 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-02-04 00:27:31.972025 | orchestrator |
2026-02-04 00:27:31.972036 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-02-04 00:27:31.972047 | orchestrator | Wednesday 04 February 2026 00:27:27 +0000 (0:00:01.704) 0:00:38.887 ****
2026-02-04 00:27:31.972065 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:27:31.972076 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:27:31.972087 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:27:31.972098 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:27:31.972108 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:27:31.972161 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:27:31.972173 | orchestrator | ok: [testbed-manager]
2026-02-04 00:27:31.972184 | orchestrator |
2026-02-04 00:27:31.972195 | orchestrator | PLAY RECAP *********************************************************************
2026-02-04 00:27:31.972207 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-04 00:27:31.972218 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-04 00:27:31.972254 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-04 00:27:31.972267 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-04 00:27:31.972278 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-04 00:27:31.972289 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-04 00:27:31.972300 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-04 00:27:31.972310 | orchestrator |
2026-02-04 00:27:31.972321 | orchestrator |
2026-02-04 00:27:31.972332 | orchestrator | TASKS RECAP ********************************************************************
2026-02-04 00:27:31.972343 | orchestrator | Wednesday 04 February 2026 00:27:31 +0000 (0:00:04.033) 0:00:42.921 ****
2026-02-04 00:27:31.972354 | orchestrator | ===============================================================================
2026-02-04 00:27:31.972365 | orchestrator | osism.commons.repository : Update package cache ------------------------ 17.40s
2026-02-04 00:27:31.972376 | orchestrator | Install required packages (Debian) -------------------------------------- 8.38s
2026-02-04 00:27:31.972387 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.03s
2026-02-04 00:27:31.972397 | orchestrator | Copy fact files --------------------------------------------------------- 3.74s
2026-02-04 00:27:31.972408 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.70s
2026-02-04 00:27:31.972419 | orchestrator | Create custom facts directory ------------------------------------------- 1.40s
2026-02-04 00:27:31.972438 | orchestrator | Copy fact file ---------------------------------------------------------- 1.33s
2026-02-04 00:27:32.232654 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.21s
2026-02-04 00:27:32.232759 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.15s
2026-02-04 00:27:32.232781 | orchestrator | Create custom facts directory ------------------------------------------- 0.52s
2026-02-04 00:27:32.232802 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.52s
2026-02-04 00:27:32.232820 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.45s
2026-02-04 00:27:32.232839 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.27s
2026-02-04 00:27:32.232859 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.23s
2026-02-04 00:27:32.232878 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.16s
2026-02-04 00:27:32.232897 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.14s
2026-02-04 00:27:32.232943 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.11s
2026-02-04 00:27:32.232980 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.09s
2026-02-04 00:27:32.598116 | orchestrator | + osism apply bootstrap
2026-02-04 00:27:44.716988 | orchestrator | 2026-02-04 00:27:44 | INFO  | Prepare task for execution of bootstrap.
2026-02-04 00:27:44.803342 | orchestrator | 2026-02-04 00:27:44 | INFO  | Task f2aa7d70-7473-4930-a6ec-99de3cddfc08 (bootstrap) was prepared for execution.
2026-02-04 00:27:44.803441 | orchestrator | 2026-02-04 00:27:44 | INFO  | It takes a moment until task f2aa7d70-7473-4930-a6ec-99de3cddfc08 (bootstrap) has been started and output is visible here.
2026-02-04 00:28:03.373561 | orchestrator |
2026-02-04 00:28:03.373666 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2026-02-04 00:28:03.373683 | orchestrator |
2026-02-04 00:28:03.373694 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2026-02-04 00:28:03.373704 | orchestrator | Wednesday 04 February 2026 00:27:49 +0000 (0:00:00.140) 0:00:00.140 ****
2026-02-04 00:28:03.373714 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:28:03.373725 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:28:03.373735 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:28:03.373744 | orchestrator | ok: [testbed-manager]
2026-02-04 00:28:03.373754 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:28:03.373764 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:28:03.373773 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:28:03.373783 | orchestrator |
2026-02-04 00:28:03.373793 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-02-04 00:28:03.373802 | orchestrator |
2026-02-04 00:28:03.373812 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-02-04 00:28:03.373822 | orchestrator | Wednesday 04 February 2026 00:27:49 +0000 (0:00:00.271) 0:00:00.411 ****
2026-02-04 00:28:03.373832 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:28:03.373842 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:28:03.373852 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:28:03.373861 | orchestrator | ok: [testbed-manager]
2026-02-04 00:28:03.373871 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:28:03.373880 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:28:03.373890 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:28:03.373900 | orchestrator |
2026-02-04 00:28:03.373909 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2026-02-04 00:28:03.373919 | orchestrator |
2026-02-04 00:28:03.373929 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-02-04 00:28:03.373939 | orchestrator | Wednesday 04 February 2026 00:27:54 +0000 (0:00:04.935) 0:00:05.347 ****
2026-02-04 00:28:03.373949 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-04 00:28:03.373960 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-04 00:28:03.373970 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-02-04 00:28:03.373979 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-04 00:28:03.373989 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-02-04 00:28:03.373998 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-02-04 00:28:03.374008 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2026-02-04 00:28:03.374081 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-02-04 00:28:03.374096 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-02-04 00:28:03.374109 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-02-04 00:28:03.374120 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-02-04 00:28:03.374133 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-02-04 00:28:03.374145 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2026-02-04 00:28:03.374158 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2026-02-04 00:28:03.374170 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-02-04 00:28:03.374207 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-02-04 00:28:03.374219 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-02-04 00:28:03.374231 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-02-04 00:28:03.374243 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-02-04 00:28:03.374254 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2026-02-04 00:28:03.374266 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-02-04 00:28:03.374279 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-02-04 00:28:03.374290 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-02-04 00:28:03.374323 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:28:03.374334 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2026-02-04 00:28:03.374343 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-02-04 00:28:03.374353 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2026-02-04 00:28:03.374362 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-02-04 00:28:03.374372 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-02-04 00:28:03.374382 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:28:03.374391 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:28:03.374401 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2026-02-04 00:28:03.374411 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-02-04 00:28:03.374420 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)  2026-02-04 00:28:03.374430 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-02-04 00:28:03.374439 | 
orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-04 00:28:03.374449 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2026-02-04 00:28:03.374459 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-02-04 00:28:03.374468 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-04 00:28:03.374478 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-02-04 00:28:03.374488 | orchestrator | skipping: [testbed-manager] 2026-02-04 00:28:03.374497 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-02-04 00:28:03.374507 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2026-02-04 00:28:03.374517 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-02-04 00:28:03.374527 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)  2026-02-04 00:28:03.374537 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-04 00:28:03.374546 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:28:03.374573 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)  2026-02-04 00:28:03.374583 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-02-04 00:28:03.374593 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-02-04 00:28:03.374603 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-02-04 00:28:03.374612 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-02-04 00:28:03.374622 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-02-04 00:28:03.374631 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:28:03.374641 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-02-04 00:28:03.374650 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:28:03.374660 | orchestrator | 2026-02-04 00:28:03.374670 | orchestrator | 
PLAY [Apply bootstrap roles part 1] ******************************************** 2026-02-04 00:28:03.374679 | orchestrator | 2026-02-04 00:28:03.374689 | orchestrator | TASK [osism.commons.hostname : Set hostname] *********************************** 2026-02-04 00:28:03.374699 | orchestrator | Wednesday 04 February 2026 00:27:55 +0000 (0:00:00.630) 0:00:05.977 **** 2026-02-04 00:28:03.374708 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:28:03.374726 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:28:03.374736 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:28:03.374745 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:28:03.374755 | orchestrator | ok: [testbed-manager] 2026-02-04 00:28:03.374764 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:28:03.374774 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:28:03.374783 | orchestrator | 2026-02-04 00:28:03.374793 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] ***************************** 2026-02-04 00:28:03.374803 | orchestrator | Wednesday 04 February 2026 00:27:56 +0000 (0:00:01.328) 0:00:07.306 **** 2026-02-04 00:28:03.374812 | orchestrator | ok: [testbed-manager] 2026-02-04 00:28:03.374822 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:28:03.374831 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:28:03.374840 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:28:03.374850 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:28:03.374859 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:28:03.374869 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:28:03.374878 | orchestrator | 2026-02-04 00:28:03.374888 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] *********************** 2026-02-04 00:28:03.374897 | orchestrator | Wednesday 04 February 2026 00:27:58 +0000 (0:00:01.455) 0:00:08.761 **** 2026-02-04 00:28:03.374908 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 00:28:03.374920 | orchestrator | 2026-02-04 00:28:03.374929 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ****************************** 2026-02-04 00:28:03.374939 | orchestrator | Wednesday 04 February 2026 00:27:58 +0000 (0:00:00.327) 0:00:09.088 **** 2026-02-04 00:28:03.374949 | orchestrator | changed: [testbed-manager] 2026-02-04 00:28:03.374958 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:28:03.374968 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:28:03.374978 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:28:03.374987 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:28:03.374997 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:28:03.375006 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:28:03.375015 | orchestrator | 2026-02-04 00:28:03.375025 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2026-02-04 00:28:03.375034 | orchestrator | Wednesday 04 February 2026 00:28:00 +0000 (0:00:02.242) 0:00:11.331 **** 2026-02-04 00:28:03.375044 | orchestrator | skipping: [testbed-manager] 2026-02-04 00:28:03.375055 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 00:28:03.375067 | orchestrator | 2026-02-04 00:28:03.375076 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2026-02-04 00:28:03.375086 | orchestrator | Wednesday 04 February 2026 00:28:00 +0000 (0:00:00.318) 0:00:11.650 **** 2026-02-04 00:28:03.375095 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:28:03.375105 | 
orchestrator | changed: [testbed-node-0] 2026-02-04 00:28:03.375114 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:28:03.375124 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:28:03.375133 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:28:03.375160 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:28:03.375170 | orchestrator | 2026-02-04 00:28:03.375181 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ****** 2026-02-04 00:28:03.375196 | orchestrator | Wednesday 04 February 2026 00:28:02 +0000 (0:00:01.070) 0:00:12.721 **** 2026-02-04 00:28:03.375211 | orchestrator | skipping: [testbed-manager] 2026-02-04 00:28:03.375221 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:28:03.375231 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:28:03.375240 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:28:03.375249 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:28:03.375259 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:28:03.375274 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:28:03.375284 | orchestrator | 2026-02-04 00:28:03.375327 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2026-02-04 00:28:03.375353 | orchestrator | Wednesday 04 February 2026 00:28:02 +0000 (0:00:00.640) 0:00:13.361 **** 2026-02-04 00:28:03.375371 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:28:03.375387 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:28:03.375403 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:28:03.375413 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:28:03.375423 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:28:03.375432 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:28:03.375442 | orchestrator | ok: [testbed-manager] 2026-02-04 00:28:03.375451 | orchestrator | 2026-02-04 00:28:03.375461 | orchestrator | TASK [osism.commons.resolvconf : 
Check minimum and maximum number of name servers] *** 2026-02-04 00:28:03.375471 | orchestrator | Wednesday 04 February 2026 00:28:03 +0000 (0:00:00.537) 0:00:13.899 **** 2026-02-04 00:28:03.375481 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:28:03.375490 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:28:03.375508 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:28:16.415718 | orchestrator | skipping: [testbed-manager] 2026-02-04 00:28:16.415826 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:28:16.415841 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:28:16.415851 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:28:16.415862 | orchestrator | 2026-02-04 00:28:16.415873 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-02-04 00:28:16.415889 | orchestrator | Wednesday 04 February 2026 00:28:03 +0000 (0:00:00.266) 0:00:14.166 **** 2026-02-04 00:28:16.415914 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 00:28:16.415954 | orchestrator | 2026-02-04 00:28:16.415970 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-02-04 00:28:16.415986 | orchestrator | Wednesday 04 February 2026 00:28:03 +0000 (0:00:00.319) 0:00:14.485 **** 2026-02-04 00:28:16.416002 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 00:28:16.416019 | orchestrator | 2026-02-04 00:28:16.416034 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2026-02-04 
00:28:16.416050 | orchestrator | Wednesday 04 February 2026 00:28:04 +0000 (0:00:00.433) 0:00:14.919 **** 2026-02-04 00:28:16.416067 | orchestrator | ok: [testbed-manager] 2026-02-04 00:28:16.416083 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:28:16.416099 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:28:16.416114 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:28:16.416130 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:28:16.416147 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:28:16.416165 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:28:16.416181 | orchestrator | 2026-02-04 00:28:16.416200 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-02-04 00:28:16.416216 | orchestrator | Wednesday 04 February 2026 00:28:05 +0000 (0:00:01.379) 0:00:16.298 **** 2026-02-04 00:28:16.416227 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:28:16.416239 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:28:16.416250 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:28:16.416262 | orchestrator | skipping: [testbed-manager] 2026-02-04 00:28:16.416274 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:28:16.416284 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:28:16.416294 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:28:16.416304 | orchestrator | 2026-02-04 00:28:16.416314 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-02-04 00:28:16.416382 | orchestrator | Wednesday 04 February 2026 00:28:05 +0000 (0:00:00.231) 0:00:16.530 **** 2026-02-04 00:28:16.416394 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:28:16.416404 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:28:16.416413 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:28:16.416423 | orchestrator | ok: [testbed-manager] 2026-02-04 00:28:16.416433 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:28:16.416442 | orchestrator 
| ok: [testbed-node-1] 2026-02-04 00:28:16.416451 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:28:16.416461 | orchestrator | 2026-02-04 00:28:16.416471 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-02-04 00:28:16.416480 | orchestrator | Wednesday 04 February 2026 00:28:06 +0000 (0:00:00.594) 0:00:17.124 **** 2026-02-04 00:28:16.416494 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:28:16.416514 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:28:16.416539 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:28:16.416554 | orchestrator | skipping: [testbed-manager] 2026-02-04 00:28:16.416570 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:28:16.416587 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:28:16.416603 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:28:16.416620 | orchestrator | 2026-02-04 00:28:16.416636 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-02-04 00:28:16.416648 | orchestrator | Wednesday 04 February 2026 00:28:06 +0000 (0:00:00.243) 0:00:17.368 **** 2026-02-04 00:28:16.416658 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:28:16.416667 | orchestrator | ok: [testbed-manager] 2026-02-04 00:28:16.416677 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:28:16.416686 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:28:16.416696 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:28:16.416705 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:28:16.416715 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:28:16.416724 | orchestrator | 2026-02-04 00:28:16.416734 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-02-04 00:28:16.416743 | orchestrator | Wednesday 04 February 2026 00:28:07 +0000 (0:00:00.574) 0:00:17.942 **** 2026-02-04 00:28:16.416753 | orchestrator | ok: 
[testbed-manager] 2026-02-04 00:28:16.416763 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:28:16.416772 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:28:16.416782 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:28:16.416791 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:28:16.416801 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:28:16.416813 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:28:16.416829 | orchestrator | 2026-02-04 00:28:16.416858 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-02-04 00:28:16.416875 | orchestrator | Wednesday 04 February 2026 00:28:08 +0000 (0:00:01.254) 0:00:19.197 **** 2026-02-04 00:28:16.416889 | orchestrator | ok: [testbed-manager] 2026-02-04 00:28:16.416905 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:28:16.416922 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:28:16.416939 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:28:16.416953 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:28:16.416968 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:28:16.416983 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:28:16.416999 | orchestrator | 2026-02-04 00:28:16.417015 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-02-04 00:28:16.417031 | orchestrator | Wednesday 04 February 2026 00:28:09 +0000 (0:00:01.293) 0:00:20.491 **** 2026-02-04 00:28:16.417072 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 00:28:16.417091 | orchestrator | 2026-02-04 00:28:16.417107 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-02-04 00:28:16.417138 | orchestrator | Wednesday 04 February 2026 
00:28:10 +0000 (0:00:00.348) 0:00:20.840 **** 2026-02-04 00:28:16.417154 | orchestrator | skipping: [testbed-manager] 2026-02-04 00:28:16.417169 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:28:16.417186 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:28:16.417201 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:28:16.417217 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:28:16.417232 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:28:16.417249 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:28:16.417264 | orchestrator | 2026-02-04 00:28:16.417280 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-02-04 00:28:16.417296 | orchestrator | Wednesday 04 February 2026 00:28:11 +0000 (0:00:01.426) 0:00:22.266 **** 2026-02-04 00:28:16.417312 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:28:16.417368 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:28:16.417386 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:28:16.417403 | orchestrator | ok: [testbed-manager] 2026-02-04 00:28:16.417419 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:28:16.417436 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:28:16.417453 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:28:16.417469 | orchestrator | 2026-02-04 00:28:16.417487 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-02-04 00:28:16.417502 | orchestrator | Wednesday 04 February 2026 00:28:11 +0000 (0:00:00.257) 0:00:22.523 **** 2026-02-04 00:28:16.417518 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:28:16.417534 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:28:16.417551 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:28:16.417569 | orchestrator | ok: [testbed-manager] 2026-02-04 00:28:16.417586 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:28:16.417602 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:28:16.417618 | 
orchestrator | ok: [testbed-node-2] 2026-02-04 00:28:16.417633 | orchestrator | 2026-02-04 00:28:16.417648 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-02-04 00:28:16.417664 | orchestrator | Wednesday 04 February 2026 00:28:12 +0000 (0:00:00.257) 0:00:22.781 **** 2026-02-04 00:28:16.417679 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:28:16.417696 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:28:16.417712 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:28:16.417728 | orchestrator | ok: [testbed-manager] 2026-02-04 00:28:16.417745 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:28:16.417761 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:28:16.417778 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:28:16.417795 | orchestrator | 2026-02-04 00:28:16.417813 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-02-04 00:28:16.417830 | orchestrator | Wednesday 04 February 2026 00:28:12 +0000 (0:00:00.275) 0:00:23.056 **** 2026-02-04 00:28:16.417881 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 00:28:16.417902 | orchestrator | 2026-02-04 00:28:16.417920 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-02-04 00:28:16.417936 | orchestrator | Wednesday 04 February 2026 00:28:12 +0000 (0:00:00.327) 0:00:23.384 **** 2026-02-04 00:28:16.417953 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:28:16.417968 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:28:16.417984 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:28:16.418000 | orchestrator | ok: [testbed-manager] 2026-02-04 00:28:16.418169 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:28:16.418198 | orchestrator | ok: 
[testbed-node-1] 2026-02-04 00:28:16.418216 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:28:16.418234 | orchestrator | 2026-02-04 00:28:16.418251 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-02-04 00:28:16.418267 | orchestrator | Wednesday 04 February 2026 00:28:13 +0000 (0:00:00.534) 0:00:23.918 **** 2026-02-04 00:28:16.418284 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:28:16.418319 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:28:16.418411 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:28:16.418427 | orchestrator | skipping: [testbed-manager] 2026-02-04 00:28:16.418444 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:28:16.418461 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:28:16.418474 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:28:16.418486 | orchestrator | 2026-02-04 00:28:16.418500 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-02-04 00:28:16.418514 | orchestrator | Wednesday 04 February 2026 00:28:13 +0000 (0:00:00.259) 0:00:24.178 **** 2026-02-04 00:28:16.418525 | orchestrator | ok: [testbed-manager] 2026-02-04 00:28:16.418537 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:28:16.418551 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:28:16.418565 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:28:16.418579 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:28:16.418592 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:28:16.418606 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:28:16.418614 | orchestrator | 2026-02-04 00:28:16.418623 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-02-04 00:28:16.418631 | orchestrator | Wednesday 04 February 2026 00:28:14 +0000 (0:00:01.114) 0:00:25.292 **** 2026-02-04 00:28:16.418639 | orchestrator | ok: [testbed-node-3] 2026-02-04 
00:28:16.418647 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:28:16.418655 | orchestrator | ok: [testbed-manager] 2026-02-04 00:28:16.418662 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:28:16.418670 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:28:16.418678 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:28:16.418686 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:28:16.418694 | orchestrator | 2026-02-04 00:28:16.418702 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-02-04 00:28:16.418710 | orchestrator | Wednesday 04 February 2026 00:28:15 +0000 (0:00:00.707) 0:00:26.000 **** 2026-02-04 00:28:16.418717 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:28:16.418725 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:28:16.418733 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:28:16.418741 | orchestrator | ok: [testbed-manager] 2026-02-04 00:28:16.418765 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:28:59.437230 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:28:59.437348 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:28:59.437365 | orchestrator | 2026-02-04 00:28:59.437379 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-02-04 00:28:59.437392 | orchestrator | Wednesday 04 February 2026 00:28:16 +0000 (0:00:01.187) 0:00:27.187 **** 2026-02-04 00:28:59.437455 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:28:59.437471 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:28:59.437482 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:28:59.437493 | orchestrator | changed: [testbed-manager] 2026-02-04 00:28:59.437504 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:28:59.437515 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:28:59.437526 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:28:59.437537 | orchestrator | 2026-02-04 00:28:59.437548 | orchestrator | TASK 
[osism.services.rsyslog : Gather variables for each operating system] ***** 2026-02-04 00:28:59.437561 | orchestrator | Wednesday 04 February 2026 00:28:32 +0000 (0:00:16.163) 0:00:43.351 **** 2026-02-04 00:28:59.437590 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:28:59.437631 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:28:59.437655 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:28:59.437673 | orchestrator | ok: [testbed-manager] 2026-02-04 00:28:59.437690 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:28:59.437708 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:28:59.437726 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:28:59.437744 | orchestrator | 2026-02-04 00:28:59.437763 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] ***** 2026-02-04 00:28:59.437783 | orchestrator | Wednesday 04 February 2026 00:28:32 +0000 (0:00:00.226) 0:00:43.577 **** 2026-02-04 00:28:59.437839 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:28:59.437859 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:28:59.437878 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:28:59.437897 | orchestrator | ok: [testbed-manager] 2026-02-04 00:28:59.437918 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:28:59.437937 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:28:59.437953 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:28:59.437965 | orchestrator | 2026-02-04 00:28:59.437975 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] *** 2026-02-04 00:28:59.437987 | orchestrator | Wednesday 04 February 2026 00:28:33 +0000 (0:00:00.231) 0:00:43.809 **** 2026-02-04 00:28:59.437997 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:28:59.438008 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:28:59.438099 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:28:59.438112 | orchestrator | ok: [testbed-manager] 2026-02-04 00:28:59.438123 | orchestrator | ok: 
[testbed-node-0] 2026-02-04 00:28:59.438134 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:28:59.438145 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:28:59.438156 | orchestrator | 2026-02-04 00:28:59.438168 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] **** 2026-02-04 00:28:59.438179 | orchestrator | Wednesday 04 February 2026 00:28:33 +0000 (0:00:00.240) 0:00:44.050 **** 2026-02-04 00:28:59.438193 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 00:28:59.438206 | orchestrator | 2026-02-04 00:28:59.438218 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2026-02-04 00:28:59.438229 | orchestrator | Wednesday 04 February 2026 00:28:33 +0000 (0:00:00.329) 0:00:44.379 **** 2026-02-04 00:28:59.438240 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:28:59.438251 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:28:59.438262 | orchestrator | ok: [testbed-manager] 2026-02-04 00:28:59.438272 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:28:59.438302 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:28:59.438314 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:28:59.438325 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:28:59.438335 | orchestrator | 2026-02-04 00:28:59.438347 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2026-02-04 00:28:59.438358 | orchestrator | Wednesday 04 February 2026 00:28:35 +0000 (0:00:01.780) 0:00:46.160 **** 2026-02-04 00:28:59.438369 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:28:59.438380 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:28:59.438391 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:28:59.438402 | orchestrator | 
changed: [testbed-manager]
2026-02-04 00:28:59.438467 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:28:59.438478 | orchestrator | changed: [testbed-node-5]
2026-02-04 00:28:59.438489 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:28:59.438500 | orchestrator |
2026-02-04 00:28:59.438511 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] *************************
2026-02-04 00:28:59.438522 | orchestrator | Wednesday 04 February 2026 00:28:36 +0000 (0:00:01.057) 0:00:47.217 ****
2026-02-04 00:28:59.438533 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:28:59.438544 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:28:59.438554 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:28:59.438565 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:28:59.438576 | orchestrator | ok: [testbed-manager]
2026-02-04 00:28:59.438586 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:28:59.438597 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:28:59.438607 | orchestrator |
2026-02-04 00:28:59.438618 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] **************************
2026-02-04 00:28:59.438629 | orchestrator | Wednesday 04 February 2026 00:28:37 +0000 (0:00:00.828) 0:00:48.046 ****
2026-02-04 00:28:59.438646 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 00:28:59.438670 | orchestrator |
2026-02-04 00:28:59.438681 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] ***
2026-02-04 00:28:59.438693 | orchestrator | Wednesday 04 February 2026 00:28:37 +0000 (0:00:00.327) 0:00:48.373 ****
2026-02-04 00:28:59.438704 | orchestrator | changed: [testbed-node-3]
2026-02-04 00:28:59.438714 | orchestrator | changed: [testbed-manager]
2026-02-04 00:28:59.438725 | orchestrator | changed: [testbed-node-4]
2026-02-04 00:28:59.438736 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:28:59.438747 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:28:59.438757 | orchestrator | changed: [testbed-node-5]
2026-02-04 00:28:59.438768 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:28:59.438779 | orchestrator |
2026-02-04 00:28:59.438812 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************
2026-02-04 00:28:59.438824 | orchestrator | Wednesday 04 February 2026 00:28:38 +0000 (0:00:01.140) 0:00:49.514 ****
2026-02-04 00:28:59.438835 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:28:59.438846 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:28:59.438856 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:28:59.438867 | orchestrator | skipping: [testbed-manager]
2026-02-04 00:28:59.438877 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:28:59.438888 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:28:59.438899 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:28:59.438910 | orchestrator |
2026-02-04 00:28:59.438921 | orchestrator | TASK [osism.services.rsyslog : Include logrotate tasks] ************************
2026-02-04 00:28:59.438931 | orchestrator | Wednesday 04 February 2026 00:28:39 +0000 (0:00:00.259) 0:00:49.773 ****
2026-02-04 00:28:59.438942 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/logrotate.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 00:28:59.438953 | orchestrator |
2026-02-04 00:28:59.438964 | orchestrator | TASK [osism.services.rsyslog : Ensure logrotate package is installed] **********
2026-02-04 00:28:59.438975 | orchestrator | Wednesday 04 February 2026 00:28:39 +0000 (0:00:00.349) 0:00:50.123 ****
2026-02-04 00:28:59.438986 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:28:59.438996 | orchestrator | ok: [testbed-manager]
2026-02-04 00:28:59.439007 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:28:59.439018 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:28:59.439028 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:28:59.439039 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:28:59.439049 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:28:59.439060 | orchestrator |
2026-02-04 00:28:59.439071 | orchestrator | TASK [osism.services.rsyslog : Configure logrotate for rsyslog] ****************
2026-02-04 00:28:59.439081 | orchestrator | Wednesday 04 February 2026 00:28:41 +0000 (0:00:01.657) 0:00:51.780 ****
2026-02-04 00:28:59.439092 | orchestrator | changed: [testbed-node-4]
2026-02-04 00:28:59.439103 | orchestrator | changed: [testbed-manager]
2026-02-04 00:28:59.439114 | orchestrator | changed: [testbed-node-3]
2026-02-04 00:28:59.439124 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:28:59.439135 | orchestrator | changed: [testbed-node-5]
2026-02-04 00:28:59.439145 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:28:59.439156 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:28:59.439167 | orchestrator |
2026-02-04 00:28:59.439177 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] ****************
2026-02-04 00:28:59.439188 | orchestrator | Wednesday 04 February 2026 00:28:42 +0000 (0:00:01.181) 0:00:52.962 ****
2026-02-04 00:28:59.439199 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:28:59.439210 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:28:59.439221 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:28:59.439232 | orchestrator | changed: [testbed-node-4]
2026-02-04 00:28:59.439242 | orchestrator | changed: [testbed-node-3]
2026-02-04 00:28:59.439253 | orchestrator | changed: [testbed-node-5]
2026-02-04 00:28:59.439273 | orchestrator | changed: [testbed-manager]
2026-02-04 00:28:59.439284 | orchestrator |
2026-02-04 00:28:59.439294 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] *****************************
2026-02-04 00:28:59.439312 | orchestrator | Wednesday 04 February 2026 00:28:56 +0000 (0:00:13.975) 0:01:06.937 ****
2026-02-04 00:28:59.439331 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:28:59.439350 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:28:59.439370 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:28:59.439388 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:28:59.439429 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:28:59.439441 | orchestrator | ok: [testbed-manager]
2026-02-04 00:28:59.439452 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:28:59.439463 | orchestrator |
2026-02-04 00:28:59.439474 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ******************
2026-02-04 00:28:59.439485 | orchestrator | Wednesday 04 February 2026 00:28:57 +0000 (0:00:01.404) 0:01:08.341 ****
2026-02-04 00:28:59.439495 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:28:59.439506 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:28:59.439517 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:28:59.439528 | orchestrator | ok: [testbed-manager]
2026-02-04 00:28:59.439538 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:28:59.439549 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:28:59.439560 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:28:59.439570 | orchestrator |
2026-02-04 00:28:59.439581 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] *****
2026-02-04 00:28:59.439592 | orchestrator | Wednesday 04 February 2026 00:28:58 +0000 (0:00:00.898) 0:01:09.240 ****
2026-02-04 00:28:59.439603 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:28:59.439613 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:28:59.439624 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:28:59.439635 | orchestrator | ok: [testbed-manager]
2026-02-04 00:28:59.439645 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:28:59.439656 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:28:59.439666 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:28:59.439677 | orchestrator |
2026-02-04 00:28:59.439688 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] ***
2026-02-04 00:28:59.439699 | orchestrator | Wednesday 04 February 2026 00:28:58 +0000 (0:00:00.263) 0:01:09.504 ****
2026-02-04 00:28:59.439710 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:28:59.439720 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:28:59.439731 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:28:59.439748 | orchestrator | ok: [testbed-manager]
2026-02-04 00:28:59.439759 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:28:59.439770 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:28:59.439780 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:28:59.439791 | orchestrator |
2026-02-04 00:28:59.439802 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] ****
2026-02-04 00:28:59.439813 | orchestrator | Wednesday 04 February 2026 00:28:59 +0000 (0:00:00.251) 0:01:09.755 ****
2026-02-04 00:28:59.439825 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 00:28:59.439836 | orchestrator |
2026-02-04 00:28:59.439856 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ********************
2026-02-04 00:31:13.857319 | orchestrator | Wednesday 04 February 2026 00:28:59 +0000 (0:00:00.358) 0:01:10.113 ****
2026-02-04 00:31:13.857430 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:31:13.857448 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:31:13.857460 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:31:13.857471 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:31:13.857482 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:31:13.857494 | orchestrator | ok: [testbed-manager]
2026-02-04 00:31:13.857505 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:31:13.857516 | orchestrator |
2026-02-04 00:31:13.857528 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] ***************************
2026-02-04 00:31:13.857566 | orchestrator | Wednesday 04 February 2026 00:29:01 +0000 (0:00:01.753) 0:01:11.866 ****
2026-02-04 00:31:13.857578 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:31:13.857589 | orchestrator | changed: [testbed-manager]
2026-02-04 00:31:13.857600 | orchestrator | changed: [testbed-node-3]
2026-02-04 00:31:13.857671 | orchestrator | changed: [testbed-node-5]
2026-02-04 00:31:13.857684 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:31:13.857695 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:31:13.857706 | orchestrator | changed: [testbed-node-4]
2026-02-04 00:31:13.857717 | orchestrator |
2026-02-04 00:31:13.857728 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] ***
2026-02-04 00:31:13.857740 | orchestrator | Wednesday 04 February 2026 00:29:01 +0000 (0:00:00.679) 0:01:12.546 ****
2026-02-04 00:31:13.857751 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:31:13.857762 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:31:13.857775 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:31:13.857787 | orchestrator | ok: [testbed-manager]
2026-02-04 00:31:13.857800 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:31:13.857812 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:31:13.857825 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:31:13.857837 | orchestrator |
2026-02-04 00:31:13.857850 | orchestrator | TASK [osism.commons.packages : Update package cache] ***************************
2026-02-04 00:31:13.857863 | orchestrator | Wednesday 04 February 2026 00:29:02 +0000 (0:00:00.265) 0:01:12.812 ****
2026-02-04 00:31:13.857876 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:31:13.857889 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:31:13.857901 | orchestrator | ok: [testbed-manager]
2026-02-04 00:31:13.857914 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:31:13.857927 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:31:13.857940 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:31:13.857951 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:31:13.857964 | orchestrator |
2026-02-04 00:31:13.857977 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] **********************
2026-02-04 00:31:13.857990 | orchestrator | Wednesday 04 February 2026 00:29:03 +0000 (0:00:01.293) 0:01:14.106 ****
2026-02-04 00:31:13.858003 | orchestrator | changed: [testbed-manager]
2026-02-04 00:31:13.858080 | orchestrator | changed: [testbed-node-3]
2026-02-04 00:31:13.858095 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:31:13.858108 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:31:13.858120 | orchestrator | changed: [testbed-node-4]
2026-02-04 00:31:13.858134 | orchestrator | changed: [testbed-node-5]
2026-02-04 00:31:13.858145 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:31:13.858156 | orchestrator |
2026-02-04 00:31:13.858167 | orchestrator | TASK [osism.commons.packages : Upgrade packages] *******************************
2026-02-04 00:31:13.858178 | orchestrator | Wednesday 04 February 2026 00:29:05 +0000 (0:00:01.872) 0:01:15.978 ****
2026-02-04 00:31:13.858189 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:31:13.858200 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:31:13.858211 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:31:13.858222 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:31:13.858233 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:31:13.858244 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:31:13.858255 | orchestrator | ok: [testbed-manager]
2026-02-04 00:31:13.858266 | orchestrator |
2026-02-04 00:31:13.858277 | orchestrator | TASK [osism.commons.packages : Download required packages] *********************
2026-02-04 00:31:13.858288 | orchestrator | Wednesday 04 February 2026 00:29:07 +0000 (0:00:02.459) 0:01:18.438 ****
2026-02-04 00:31:13.858299 | orchestrator | ok: [testbed-manager]
2026-02-04 00:31:13.858310 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:31:13.858321 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:31:13.858331 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:31:13.858342 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:31:13.858353 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:31:13.858364 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:31:13.858384 | orchestrator |
2026-02-04 00:31:13.858396 | orchestrator | TASK [osism.commons.packages : Install required packages] **********************
2026-02-04 00:31:13.858407 | orchestrator | Wednesday 04 February 2026 00:29:44 +0000 (0:00:37.140) 0:01:55.578 ****
2026-02-04 00:31:13.858418 | orchestrator | changed: [testbed-manager]
2026-02-04 00:31:13.858429 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:31:13.858440 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:31:13.858451 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:31:13.858462 | orchestrator | changed: [testbed-node-5]
2026-02-04 00:31:13.858473 | orchestrator | changed: [testbed-node-3]
2026-02-04 00:31:13.858484 | orchestrator | changed: [testbed-node-4]
2026-02-04 00:31:13.858495 | orchestrator |
2026-02-04 00:31:13.858506 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] *********
2026-02-04 00:31:13.858517 | orchestrator | Wednesday 04 February 2026 00:30:56 +0000 (0:01:12.013) 0:03:07.592 ****
2026-02-04 00:31:13.858528 | orchestrator | ok: [testbed-manager]
2026-02-04 00:31:13.858539 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:31:13.858549 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:31:13.858560 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:31:13.858571 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:31:13.858583 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:31:13.858593 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:31:13.858604 | orchestrator |
2026-02-04 00:31:13.858659 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] ***
2026-02-04 00:31:13.858671 | orchestrator | Wednesday 04 February 2026 00:30:58 +0000 (0:00:01.698) 0:03:09.291 ****
2026-02-04 00:31:13.858682 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:31:13.858693 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:31:13.858703 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:31:13.858714 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:31:13.858725 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:31:13.858736 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:31:13.858747 | orchestrator | changed: [testbed-manager]
2026-02-04 00:31:13.858758 | orchestrator |
2026-02-04 00:31:13.858768 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] *****************************
2026-02-04 00:31:13.858780 | orchestrator | Wednesday 04 February 2026 00:31:10 +0000 (0:00:11.989) 0:03:21.280 ****
2026-02-04 00:31:13.858827 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]})
2026-02-04 00:31:13.858851 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]})
2026-02-04 00:31:13.858866 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]})
2026-02-04 00:31:13.858879 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2026-02-04 00:31:13.858898 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'network', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2026-02-04 00:31:13.858913 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]})
2026-02-04 00:31:13.858925 | orchestrator |
2026-02-04 00:31:13.858936 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] ***********
2026-02-04 00:31:13.858947 | orchestrator | Wednesday 04 February 2026 00:31:11 +0000 (0:00:00.436) 0:03:21.717 ****
2026-02-04 00:31:13.858958 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-04 00:31:13.858969 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-04 00:31:13.858980 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:31:13.858991 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-04 00:31:13.859002 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:31:13.859012 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:31:13.859023 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-04 00:31:13.859034 | orchestrator | skipping: [testbed-manager]
2026-02-04 00:31:13.859045 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-04 00:31:13.859066 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-04 00:31:13.859077 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-04 00:31:13.859088 | orchestrator |
2026-02-04 00:31:13.859099 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] ****************
2026-02-04 00:31:13.859115 | orchestrator | Wednesday 04 February 2026 00:31:13 +0000 (0:00:02.752) 0:03:24.469 ****
2026-02-04 00:31:13.859126 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-02-04 00:31:13.859138 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-02-04 00:31:13.859149 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-02-04 00:31:13.859160 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-02-04 00:31:13.859171 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-02-04 00:31:13.859189 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-02-04 00:31:23.087730 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-02-04 00:31:23.087871 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-02-04 00:31:23.087898 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-02-04 00:31:23.087916 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-02-04 00:31:23.087929 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-02-04 00:31:23.087940 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-02-04 00:31:23.087951 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-02-04 00:31:23.087991 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-02-04 00:31:23.088003 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-02-04 00:31:23.088014 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-02-04 00:31:23.088025 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-02-04 00:31:23.088036 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-02-04 00:31:23.088047 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-02-04 00:31:23.088058 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-02-04 00:31:23.088069 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-02-04 00:31:23.088081 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-02-04 00:31:23.088098 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-02-04 00:31:23.088115 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-02-04 00:31:23.088126 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-02-04 00:31:23.088142 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-02-04 00:31:23.088161 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:31:23.088180 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-02-04 00:31:23.088197 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-02-04 00:31:23.088214 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-02-04 00:31:23.088232 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-02-04 00:31:23.088252 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-02-04 00:31:23.088273 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-02-04 00:31:23.088294 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-02-04 00:31:23.088313 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-02-04 00:31:23.088331 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:31:23.088344 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-02-04 00:31:23.088357 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-02-04 00:31:23.088370 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-02-04 00:31:23.088383 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-02-04 00:31:23.088395 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-02-04 00:31:23.088424 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-02-04 00:31:23.088438 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:31:23.088450 | orchestrator | skipping: [testbed-manager]
2026-02-04 00:31:23.088463 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-02-04 00:31:23.088476 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-02-04 00:31:23.088489 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-02-04 00:31:23.088513 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-02-04 00:31:23.088526 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-02-04 00:31:23.088559 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-02-04 00:31:23.088574 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-02-04 00:31:23.088585 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-02-04 00:31:23.088596 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-02-04 00:31:23.088606 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-02-04 00:31:23.088617 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-02-04 00:31:23.088668 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-02-04 00:31:23.088681 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-02-04 00:31:23.088692 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-02-04 00:31:23.088703 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-02-04 00:31:23.088714 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-02-04 00:31:23.088728 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-02-04 00:31:23.088746 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-02-04 00:31:23.088764 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-02-04 00:31:23.088782 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-02-04 00:31:23.088801 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-02-04 00:31:23.088820 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-02-04 00:31:23.088838 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-02-04 00:31:23.088855 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-02-04 00:31:23.088866 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-02-04 00:31:23.088877 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-02-04 00:31:23.088888 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-02-04 00:31:23.088898 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-02-04 00:31:23.088909 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-02-04 00:31:23.088920 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-02-04 00:31:23.088931 | orchestrator |
2026-02-04 00:31:23.088942 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] *****************
2026-02-04 00:31:23.088953 | orchestrator | Wednesday 04 February 2026 00:31:21 +0000 (0:00:08.132) 0:03:32.602 ****
2026-02-04 00:31:23.088964 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-04 00:31:23.088975 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-04 00:31:23.088986 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-04 00:31:23.088997 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-04 00:31:23.089017 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-04 00:31:23.089028 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-04 00:31:23.089039 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-04 00:31:23.089050 | orchestrator |
2026-02-04 00:31:23.089061 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] *****************
2026-02-04 00:31:23.089072 | orchestrator | Wednesday 04 February 2026 00:31:22 +0000 (0:00:00.712) 0:03:33.315 ****
2026-02-04 00:31:23.089082 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-04 00:31:23.089094 | orchestrator | skipping: [testbed-manager]
2026-02-04 00:31:23.089111 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-04 00:31:23.089123 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-04 00:31:23.089134 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:31:23.089145 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:31:23.089156 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-04 00:31:23.089167 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:31:23.089177 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-04 00:31:23.089188 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-04 00:31:23.089215 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-04 00:31:37.673991 | orchestrator |
2026-02-04 00:31:37.674179 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on network] *****************
2026-02-04 00:31:37.674198 | orchestrator | Wednesday 04 February 2026 00:31:23 +0000 (0:00:00.482) 0:03:33.797 ****
2026-02-04 00:31:37.674210 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-04 00:31:37.674223 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:31:37.674235 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-04 00:31:37.674247 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-04 00:31:37.674258 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:31:37.674269 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:31:37.674280 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-04 00:31:37.674291 | orchestrator | skipping: [testbed-manager]
2026-02-04 00:31:37.674302 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-04 00:31:37.674313 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-04 00:31:37.674324 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-04 00:31:37.674335 | orchestrator |
2026-02-04 00:31:37.674346 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] ****************
2026-02-04 00:31:37.674357 | orchestrator | Wednesday 04 February 2026 00:31:23 +0000 (0:00:00.642) 0:03:34.440 ****
2026-02-04 00:31:37.674368 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-02-04 00:31:37.674379 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-02-04 00:31:37.674390 | orchestrator | skipping: [testbed-manager]
2026-02-04 00:31:37.674401 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-02-04 00:31:37.674441 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:31:37.674453 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:31:37.674464 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-02-04 00:31:37.674475 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:31:37.674486 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-02-04 00:31:37.674497 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-02-04 00:31:37.674511 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-02-04 00:31:37.674523 | orchestrator |
2026-02-04 00:31:37.674536 | orchestrator | TASK [osism.commons.limits : Include limits tasks] *****************************
2026-02-04 00:31:37.674548 | orchestrator | Wednesday 04 February 2026 00:31:25 +0000 (0:00:01.606) 0:03:36.047 ****
2026-02-04 00:31:37.674562 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:31:37.674576 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:31:37.674589 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:31:37.674602 | orchestrator | skipping: [testbed-manager]
2026-02-04 00:31:37.674615 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:31:37.674628 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:31:37.674695 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:31:37.674710 | orchestrator |
2026-02-04 00:31:37.674724 | orchestrator | TASK [osism.commons.services : Populate service facts] *************************
2026-02-04 00:31:37.674736 | orchestrator | Wednesday 04 February 2026 00:31:25 +0000 (0:00:00.353) 0:03:36.400 ****
2026-02-04 00:31:37.674749 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:31:37.674763 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:31:37.674774 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:31:37.674787 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:31:37.674800 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:31:37.674813 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:31:37.674826 | orchestrator | ok: [testbed-manager]
2026-02-04 00:31:37.674837 | orchestrator |
2026-02-04 00:31:37.674848 | orchestrator | TASK [osism.commons.services : Check services] *********************************
2026-02-04 00:31:37.674859 | orchestrator | Wednesday 04 February 2026 00:31:31 +0000 (0:00:06.067) 0:03:42.468 ****
2026-02-04 00:31:37.674870 | orchestrator | skipping: [testbed-node-3] => (item=nscd)
2026-02-04 00:31:37.674882 | orchestrator | skipping: [testbed-node-4] => (item=nscd)
2026-02-04 00:31:37.674893 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:31:37.674904 | orchestrator | skipping: [testbed-node-5] => (item=nscd)
2026-02-04 00:31:37.674915 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:31:37.674926 | orchestrator | skipping: [testbed-manager] => (item=nscd)
2026-02-04 00:31:37.674936 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:31:37.674947 | orchestrator | skipping: [testbed-manager]
2026-02-04 00:31:37.674958 | orchestrator | skipping: [testbed-node-0] => (item=nscd)
2026-02-04 00:31:37.674968 | orchestrator | skipping: [testbed-node-1] => (item=nscd)
2026-02-04 00:31:37.674979 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:31:37.674990 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:31:37.675001 | orchestrator | skipping: [testbed-node-2] => (item=nscd)
2026-02-04 00:31:37.675011
| orchestrator | skipping: [testbed-node-2] 2026-02-04 00:31:37.675022 | orchestrator | 2026-02-04 00:31:37.675033 | orchestrator | TASK [osism.commons.services : Start/enable required services] ***************** 2026-02-04 00:31:37.675044 | orchestrator | Wednesday 04 February 2026 00:31:32 +0000 (0:00:00.365) 0:03:42.834 **** 2026-02-04 00:31:37.675055 | orchestrator | ok: [testbed-node-4] => (item=cron) 2026-02-04 00:31:37.675066 | orchestrator | ok: [testbed-node-5] => (item=cron) 2026-02-04 00:31:37.675077 | orchestrator | ok: [testbed-node-3] => (item=cron) 2026-02-04 00:31:37.675109 | orchestrator | ok: [testbed-node-0] => (item=cron) 2026-02-04 00:31:37.675121 | orchestrator | ok: [testbed-manager] => (item=cron) 2026-02-04 00:31:37.675132 | orchestrator | ok: [testbed-node-1] => (item=cron) 2026-02-04 00:31:37.675152 | orchestrator | ok: [testbed-node-2] => (item=cron) 2026-02-04 00:31:37.675163 | orchestrator | 2026-02-04 00:31:37.675174 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ****** 2026-02-04 00:31:37.675185 | orchestrator | Wednesday 04 February 2026 00:31:33 +0000 (0:00:01.136) 0:03:43.971 **** 2026-02-04 00:31:37.675199 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 00:31:37.675213 | orchestrator | 2026-02-04 00:31:37.675224 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] ************************* 2026-02-04 00:31:37.675235 | orchestrator | Wednesday 04 February 2026 00:31:33 +0000 (0:00:00.554) 0:03:44.525 **** 2026-02-04 00:31:37.675246 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:31:37.675257 | orchestrator | ok: [testbed-manager] 2026-02-04 00:31:37.675268 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:31:37.675279 | orchestrator | ok: 
[testbed-node-4] 2026-02-04 00:31:37.675290 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:31:37.675301 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:31:37.675311 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:31:37.675322 | orchestrator | 2026-02-04 00:31:37.675333 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] ************* 2026-02-04 00:31:37.675344 | orchestrator | Wednesday 04 February 2026 00:31:35 +0000 (0:00:01.362) 0:03:45.887 **** 2026-02-04 00:31:37.675355 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:31:37.675366 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:31:37.675377 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:31:37.675388 | orchestrator | ok: [testbed-manager] 2026-02-04 00:31:37.675398 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:31:37.675409 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:31:37.675420 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:31:37.675430 | orchestrator | 2026-02-04 00:31:37.675441 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] ************** 2026-02-04 00:31:37.675452 | orchestrator | Wednesday 04 February 2026 00:31:35 +0000 (0:00:00.627) 0:03:46.515 **** 2026-02-04 00:31:37.675463 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:31:37.675492 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:31:37.675504 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:31:37.675515 | orchestrator | changed: [testbed-manager] 2026-02-04 00:31:37.675526 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:31:37.675537 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:31:37.675548 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:31:37.675558 | orchestrator | 2026-02-04 00:31:37.675582 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] ********** 2026-02-04 00:31:37.675594 | orchestrator | Wednesday 04 February 2026 00:31:36 +0000 (0:00:00.649) 
0:03:47.164 **** 2026-02-04 00:31:37.675605 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:31:37.675615 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:31:37.675626 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:31:37.675637 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:31:37.675697 | orchestrator | ok: [testbed-manager] 2026-02-04 00:31:37.675709 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:31:37.675719 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:31:37.675730 | orchestrator | 2026-02-04 00:31:37.675741 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2026-02-04 00:31:37.675752 | orchestrator | Wednesday 04 February 2026 00:31:37 +0000 (0:00:00.623) 0:03:47.788 **** 2026-02-04 00:31:37.675768 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1770163511.7759151, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-04 00:31:37.675795 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1770163521.0583603, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-04 00:31:37.675808 | orchestrator | changed: 
[testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1770163523.5081084, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-04 00:31:37.675844 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1770163566.2219512, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-04 00:31:43.369973 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1770163503.0368366, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-04 00:31:43.370200 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 
'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1770163522.721464, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-04 00:31:43.370227 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1770163505.373344, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-04 00:31:43.370247 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-04 00:31:43.370298 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': 
True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-04 00:31:43.370333 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-04 00:31:43.370351 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-04 00:31:43.370392 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-04 00:31:43.370410 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': 
'0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-04 00:31:43.370427 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-04 00:31:43.370445 | orchestrator | 2026-02-04 00:31:43.370465 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2026-02-04 00:31:43.370485 | orchestrator | Wednesday 04 February 2026 00:31:38 +0000 (0:00:01.062) 0:03:48.850 **** 2026-02-04 00:31:43.370503 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:31:43.370521 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:31:43.370538 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:31:43.370563 | orchestrator | changed: [testbed-manager] 2026-02-04 00:31:43.370579 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:31:43.370596 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:31:43.370613 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:31:43.370630 | orchestrator | 2026-02-04 00:31:43.370715 | orchestrator | TASK [osism.commons.motd : Copy issue file] 
************************************ 2026-02-04 00:31:43.370737 | orchestrator | Wednesday 04 February 2026 00:31:39 +0000 (0:00:01.266) 0:03:50.117 **** 2026-02-04 00:31:43.370751 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:31:43.370766 | orchestrator | changed: [testbed-manager] 2026-02-04 00:31:43.370782 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:31:43.370798 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:31:43.370814 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:31:43.370830 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:31:43.370844 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:31:43.370858 | orchestrator | 2026-02-04 00:31:43.370870 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ******************************** 2026-02-04 00:31:43.370883 | orchestrator | Wednesday 04 February 2026 00:31:40 +0000 (0:00:01.240) 0:03:51.357 **** 2026-02-04 00:31:43.370895 | orchestrator | changed: [testbed-manager] 2026-02-04 00:31:43.370907 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:31:43.370919 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:31:43.370933 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:31:43.370946 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:31:43.370958 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:31:43.370970 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:31:43.370983 | orchestrator | 2026-02-04 00:31:43.370995 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ******************** 2026-02-04 00:31:43.371017 | orchestrator | Wednesday 04 February 2026 00:31:41 +0000 (0:00:01.172) 0:03:52.530 **** 2026-02-04 00:31:43.371030 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:31:43.371043 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:31:43.371051 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:31:43.371059 | orchestrator | skipping: [testbed-manager] 
2026-02-04 00:31:43.371067 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:31:43.371074 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:31:43.371082 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:31:43.371090 | orchestrator | 2026-02-04 00:31:43.371098 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 2026-02-04 00:31:43.371106 | orchestrator | Wednesday 04 February 2026 00:31:42 +0000 (0:00:00.296) 0:03:52.826 **** 2026-02-04 00:31:43.371114 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:31:43.371124 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:31:43.371131 | orchestrator | ok: [testbed-manager] 2026-02-04 00:31:43.371139 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:31:43.371147 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:31:43.371155 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:31:43.371162 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:31:43.371170 | orchestrator | 2026-02-04 00:31:43.371178 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2026-02-04 00:31:43.371186 | orchestrator | Wednesday 04 February 2026 00:31:42 +0000 (0:00:00.772) 0:03:53.599 **** 2026-02-04 00:31:43.371196 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 00:31:43.371207 | orchestrator | 2026-02-04 00:31:43.371215 | orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2026-02-04 00:31:43.371232 | orchestrator | Wednesday 04 February 2026 00:31:43 +0000 (0:00:00.449) 0:03:54.049 **** 2026-02-04 00:33:01.868554 | orchestrator | ok: [testbed-manager] 2026-02-04 00:33:01.868680 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:33:01.868710 | orchestrator | changed: 
[testbed-node-2] 2026-02-04 00:33:01.868812 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:33:01.868826 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:33:01.868836 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:33:01.868847 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:33:01.868860 | orchestrator | 2026-02-04 00:33:01.868872 | orchestrator | TASK [osism.services.rng : Remove haveged package] ***************************** 2026-02-04 00:33:01.868885 | orchestrator | Wednesday 04 February 2026 00:31:52 +0000 (0:00:08.657) 0:04:02.706 **** 2026-02-04 00:33:01.868896 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:33:01.868907 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:33:01.868918 | orchestrator | ok: [testbed-manager] 2026-02-04 00:33:01.868929 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:33:01.868940 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:33:01.868950 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:33:01.868961 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:33:01.868972 | orchestrator | 2026-02-04 00:33:01.868983 | orchestrator | TASK [osism.services.rng : Manage rng service] ********************************* 2026-02-04 00:33:01.868994 | orchestrator | Wednesday 04 February 2026 00:31:53 +0000 (0:00:01.337) 0:04:04.043 **** 2026-02-04 00:33:01.869004 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:33:01.869015 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:33:01.869026 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:33:01.869036 | orchestrator | ok: [testbed-manager] 2026-02-04 00:33:01.869047 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:33:01.869057 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:33:01.869068 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:33:01.869080 | orchestrator | 2026-02-04 00:33:01.869093 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ****** 2026-02-04 00:33:01.869105 | orchestrator | 
Wednesday 04 February 2026 00:31:54 +0000 (0:00:01.028) 0:04:05.072 **** 2026-02-04 00:33:01.869118 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:33:01.869130 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:33:01.869143 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:33:01.869156 | orchestrator | ok: [testbed-manager] 2026-02-04 00:33:01.869168 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:33:01.869180 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:33:01.869193 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:33:01.869206 | orchestrator | 2026-02-04 00:33:01.869219 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] *** 2026-02-04 00:33:01.869232 | orchestrator | Wednesday 04 February 2026 00:31:54 +0000 (0:00:00.340) 0:04:05.412 **** 2026-02-04 00:33:01.869245 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:33:01.869258 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:33:01.869270 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:33:01.869283 | orchestrator | ok: [testbed-manager] 2026-02-04 00:33:01.869296 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:33:01.869308 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:33:01.869321 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:33:01.869333 | orchestrator | 2026-02-04 00:33:01.869346 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] *** 2026-02-04 00:33:01.869359 | orchestrator | Wednesday 04 February 2026 00:31:55 +0000 (0:00:00.377) 0:04:05.790 **** 2026-02-04 00:33:01.869371 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:33:01.869383 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:33:01.869395 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:33:01.869409 | orchestrator | ok: [testbed-manager] 2026-02-04 00:33:01.869422 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:33:01.869435 | orchestrator | ok: [testbed-node-1] 2026-02-04 
00:33:01.869446 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:33:01.869456 | orchestrator | 2026-02-04 00:33:01.869467 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] ************************** 2026-02-04 00:33:01.869478 | orchestrator | Wednesday 04 February 2026 00:31:55 +0000 (0:00:00.375) 0:04:06.165 **** 2026-02-04 00:33:01.869489 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:33:01.869500 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:33:01.869510 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:33:01.869530 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:33:01.869541 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:33:01.869551 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:33:01.869562 | orchestrator | ok: [testbed-manager] 2026-02-04 00:33:01.869573 | orchestrator | 2026-02-04 00:33:01.869584 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] ******* 2026-02-04 00:33:01.869595 | orchestrator | Wednesday 04 February 2026 00:32:00 +0000 (0:00:05.076) 0:04:11.241 **** 2026-02-04 00:33:01.869608 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 00:33:01.869622 | orchestrator | 2026-02-04 00:33:01.869633 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************ 2026-02-04 00:33:01.869644 | orchestrator | Wednesday 04 February 2026 00:32:01 +0000 (0:00:00.474) 0:04:11.716 **** 2026-02-04 00:33:01.869655 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)  2026-02-04 00:33:01.869666 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)  2026-02-04 00:33:01.869677 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)  2026-02-04 00:33:01.869688 | orchestrator | skipping: 
[testbed-node-4] => (item=apt-daily)  2026-02-04 00:33:01.869699 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:33:01.869749 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)  2026-02-04 00:33:01.869762 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)  2026-02-04 00:33:01.869773 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:33:01.869784 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)  2026-02-04 00:33:01.869794 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)  2026-02-04 00:33:01.869805 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:33:01.869816 | orchestrator | skipping: [testbed-manager] 2026-02-04 00:33:01.869827 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)  2026-02-04 00:33:01.869838 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)  2026-02-04 00:33:01.869849 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)  2026-02-04 00:33:01.869860 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)  2026-02-04 00:33:01.869890 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:33:01.869902 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:33:01.869913 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)  2026-02-04 00:33:01.869924 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)  2026-02-04 00:33:01.869935 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:33:01.869946 | orchestrator | 2026-02-04 00:33:01.869957 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] *************************** 2026-02-04 00:33:01.869968 | orchestrator | Wednesday 04 February 2026 00:32:01 +0000 (0:00:00.390) 0:04:12.108 **** 2026-02-04 00:33:01.869979 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-node-3, testbed-node-4, 
testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 00:33:01.869991 | orchestrator | 2026-02-04 00:33:01.870002 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ******************************** 2026-02-04 00:33:01.870012 | orchestrator | Wednesday 04 February 2026 00:32:01 +0000 (0:00:00.423) 0:04:12.532 **** 2026-02-04 00:33:01.870106 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)  2026-02-04 00:33:01.870118 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)  2026-02-04 00:33:01.870129 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:33:01.870140 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)  2026-02-04 00:33:01.870151 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:33:01.870162 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:33:01.870184 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)  2026-02-04 00:33:01.870195 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)  2026-02-04 00:33:01.870206 | orchestrator | skipping: [testbed-manager] 2026-02-04 00:33:01.870235 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:33:01.870247 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)  2026-02-04 00:33:01.870258 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:33:01.870268 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)  2026-02-04 00:33:01.870279 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:33:01.870290 | orchestrator | 2026-02-04 00:33:01.870301 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] ************************** 2026-02-04 00:33:01.870312 | orchestrator | Wednesday 04 February 2026 00:32:02 +0000 (0:00:00.346) 0:04:12.879 **** 2026-02-04 00:33:01.870323 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 00:33:01.870334 | orchestrator |
2026-02-04 00:33:01.870345 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2026-02-04 00:33:01.870356 | orchestrator | Wednesday 04 February 2026 00:32:02 +0000 (0:00:00.466) 0:04:13.346 ****
2026-02-04 00:33:01.870366 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:33:01.870378 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:33:01.870389 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:33:01.870399 | orchestrator | changed: [testbed-node-4]
2026-02-04 00:33:01.870410 | orchestrator | changed: [testbed-node-5]
2026-02-04 00:33:01.870421 | orchestrator | changed: [testbed-node-3]
2026-02-04 00:33:01.870432 | orchestrator | changed: [testbed-manager]
2026-02-04 00:33:01.870442 | orchestrator |
2026-02-04 00:33:01.870453 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************
2026-02-04 00:33:01.870464 | orchestrator | Wednesday 04 February 2026 00:32:37 +0000 (0:00:34.470) 0:04:47.816 ****
2026-02-04 00:33:01.870475 | orchestrator | changed: [testbed-manager]
2026-02-04 00:33:01.870486 | orchestrator | changed: [testbed-node-3]
2026-02-04 00:33:01.870497 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:33:01.870507 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:33:01.870518 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:33:01.870529 | orchestrator | changed: [testbed-node-4]
2026-02-04 00:33:01.870545 | orchestrator | changed: [testbed-node-5]
2026-02-04 00:33:01.870557 | orchestrator |
2026-02-04 00:33:01.870568 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] ***********
2026-02-04 00:33:01.870579 | orchestrator | Wednesday 04 February 2026 00:32:45 +0000 (0:00:08.319) 0:04:56.135 ****
2026-02-04 00:33:01.870589 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:33:01.870600 | orchestrator | changed: [testbed-node-3]
2026-02-04 00:33:01.870611 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:33:01.870622 | orchestrator | changed: [testbed-node-5]
2026-02-04 00:33:01.870632 | orchestrator | changed: [testbed-node-4]
2026-02-04 00:33:01.870643 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:33:01.870654 | orchestrator | changed: [testbed-manager]
2026-02-04 00:33:01.870664 | orchestrator |
2026-02-04 00:33:01.870675 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] **********
2026-02-04 00:33:01.870686 | orchestrator | Wednesday 04 February 2026 00:32:53 +0000 (0:00:08.165) 0:05:04.300 ****
2026-02-04 00:33:01.870697 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:33:01.870708 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:33:01.870719 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:33:01.870729 | orchestrator | ok: [testbed-manager]
2026-02-04 00:33:01.870758 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:33:01.870769 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:33:01.870779 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:33:01.870790 | orchestrator |
2026-02-04 00:33:01.870801 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] ***
2026-02-04 00:33:01.870820 | orchestrator | Wednesday 04 February 2026 00:32:55 +0000 (0:00:01.984) 0:05:06.285 ****
2026-02-04 00:33:01.870831 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:33:01.870841 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:33:01.870852 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:33:01.870863 | orchestrator | changed: [testbed-node-4]
2026-02-04 00:33:01.870874 | orchestrator | changed: [testbed-node-3]
2026-02-04 00:33:01.870885 | orchestrator | changed: [testbed-node-5]
2026-02-04 00:33:01.870896 | orchestrator | changed: [testbed-manager]
2026-02-04 00:33:01.870907 | orchestrator |
2026-02-04 00:33:01.870927 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] *************************
2026-02-04 00:33:13.819049 | orchestrator | Wednesday 04 February 2026 00:33:01 +0000 (0:00:06.261) 0:05:12.547 ****
2026-02-04 00:33:13.819134 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 00:33:13.819144 | orchestrator |
2026-02-04 00:33:13.819151 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] *******
2026-02-04 00:33:13.819158 | orchestrator | Wednesday 04 February 2026 00:33:02 +0000 (0:00:00.482) 0:05:13.029 ****
2026-02-04 00:33:13.819163 | orchestrator | changed: [testbed-node-3]
2026-02-04 00:33:13.819170 | orchestrator | changed: [testbed-node-4]
2026-02-04 00:33:13.819176 | orchestrator | changed: [testbed-manager]
2026-02-04 00:33:13.819181 | orchestrator | changed: [testbed-node-5]
2026-02-04 00:33:13.819186 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:33:13.819192 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:33:13.819197 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:33:13.819203 | orchestrator |
2026-02-04 00:33:13.819208 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] *************************
2026-02-04 00:33:13.819214 | orchestrator | Wednesday 04 February 2026 00:33:03 +0000 (0:00:00.763) 0:05:13.793 ****
2026-02-04 00:33:13.819219 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:33:13.819225 | orchestrator | ok: [testbed-manager]
2026-02-04 00:33:13.819231 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:33:13.819236 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:33:13.819242 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:33:13.819247 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:33:13.819252 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:33:13.819257 | orchestrator |
2026-02-04 00:33:13.819263 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] ****************************
2026-02-04 00:33:13.819268 | orchestrator | Wednesday 04 February 2026 00:33:04 +0000 (0:00:01.693) 0:05:15.486 ****
2026-02-04 00:33:13.819274 | orchestrator | changed: [testbed-node-5]
2026-02-04 00:33:13.819279 | orchestrator | changed: [testbed-node-4]
2026-02-04 00:33:13.819285 | orchestrator | changed: [testbed-node-3]
2026-02-04 00:33:13.819290 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:33:13.819295 | orchestrator | changed: [testbed-manager]
2026-02-04 00:33:13.819301 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:33:13.819306 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:33:13.819311 | orchestrator |
2026-02-04 00:33:13.819317 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] ***********************
2026-02-04 00:33:13.819322 | orchestrator | Wednesday 04 February 2026 00:33:05 +0000 (0:00:00.823) 0:05:16.309 ****
2026-02-04 00:33:13.819327 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:33:13.819333 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:33:13.819338 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:33:13.819343 | orchestrator | skipping: [testbed-manager]
2026-02-04 00:33:13.819349 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:33:13.819354 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:33:13.819359 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:33:13.819365 | orchestrator |
2026-02-04 00:33:13.819370 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] *********************
2026-02-04 00:33:13.819398 | orchestrator | Wednesday 04 February 2026 00:33:05 +0000 (0:00:00.302) 0:05:16.612 ****
2026-02-04 00:33:13.819403 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:33:13.819409 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:33:13.819414 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:33:13.819419 | orchestrator | skipping: [testbed-manager]
2026-02-04 00:33:13.819424 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:33:13.819430 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:33:13.819435 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:33:13.819440 | orchestrator |
2026-02-04 00:33:13.819446 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ******
2026-02-04 00:33:13.819451 | orchestrator | Wednesday 04 February 2026 00:33:06 +0000 (0:00:00.453) 0:05:17.066 ****
2026-02-04 00:33:13.819456 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:33:13.819462 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:33:13.819467 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:33:13.819472 | orchestrator | ok: [testbed-manager]
2026-02-04 00:33:13.819478 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:33:13.819493 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:33:13.819499 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:33:13.819504 | orchestrator |
2026-02-04 00:33:13.819509 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] ****
2026-02-04 00:33:13.819515 | orchestrator | Wednesday 04 February 2026 00:33:06 +0000 (0:00:00.352) 0:05:17.418 ****
2026-02-04 00:33:13.819520 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:33:13.819526 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:33:13.819531 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:33:13.819536 | orchestrator | skipping: [testbed-manager]
2026-02-04 00:33:13.819541 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:33:13.819547 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:33:13.819552 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:33:13.819557 | orchestrator |
2026-02-04 00:33:13.819563 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] ***
2026-02-04 00:33:13.819569 | orchestrator | Wednesday 04 February 2026 00:33:07 +0000 (0:00:00.306) 0:05:17.725 ****
2026-02-04 00:33:13.819574 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:33:13.819580 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:33:13.819586 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:33:13.819593 | orchestrator | ok: [testbed-manager]
2026-02-04 00:33:13.819599 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:33:13.819605 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:33:13.819611 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:33:13.819617 | orchestrator |
2026-02-04 00:33:13.819623 | orchestrator | TASK [osism.services.docker : Print used docker version] ***********************
2026-02-04 00:33:13.819630 | orchestrator | Wednesday 04 February 2026 00:33:07 +0000 (0:00:00.349) 0:05:18.074 ****
2026-02-04 00:33:13.819636 | orchestrator | ok: [testbed-node-3] =>
2026-02-04 00:33:13.819642 | orchestrator |   docker_version: 5:27.5.1
2026-02-04 00:33:13.819649 | orchestrator | ok: [testbed-node-4] =>
2026-02-04 00:33:13.819655 | orchestrator |   docker_version: 5:27.5.1
2026-02-04 00:33:13.819661 | orchestrator | ok: [testbed-node-5] =>
2026-02-04 00:33:13.819667 | orchestrator |   docker_version: 5:27.5.1
2026-02-04 00:33:13.819674 | orchestrator | ok: [testbed-manager] =>
2026-02-04 00:33:13.819680 | orchestrator |   docker_version: 5:27.5.1
2026-02-04 00:33:13.819697 | orchestrator | ok: [testbed-node-0] =>
2026-02-04 00:33:13.819703 | orchestrator |   docker_version: 5:27.5.1
2026-02-04 00:33:13.819709 | orchestrator | ok: [testbed-node-1] =>
2026-02-04 00:33:13.819715 | orchestrator |   docker_version: 5:27.5.1
2026-02-04 00:33:13.819721 | orchestrator | ok: [testbed-node-2] =>
2026-02-04 00:33:13.819728 | orchestrator |   docker_version: 5:27.5.1
2026-02-04 00:33:13.819734 | orchestrator |
2026-02-04 00:33:13.819793 | orchestrator | TASK [osism.services.docker : Print used docker cli version] *******************
2026-02-04 00:33:13.819803 | orchestrator | Wednesday 04 February 2026 00:33:07 +0000 (0:00:00.313) 0:05:18.388 ****
2026-02-04 00:33:13.819816 | orchestrator | ok: [testbed-node-3] =>
2026-02-04 00:33:13.819823 | orchestrator |   docker_cli_version: 5:27.5.1
2026-02-04 00:33:13.819829 | orchestrator | ok: [testbed-node-4] =>
2026-02-04 00:33:13.819836 | orchestrator |   docker_cli_version: 5:27.5.1
2026-02-04 00:33:13.819842 | orchestrator | ok: [testbed-node-5] =>
2026-02-04 00:33:13.819848 | orchestrator |   docker_cli_version: 5:27.5.1
2026-02-04 00:33:13.819855 | orchestrator | ok: [testbed-manager] =>
2026-02-04 00:33:13.819861 | orchestrator |   docker_cli_version: 5:27.5.1
2026-02-04 00:33:13.819868 | orchestrator | ok: [testbed-node-0] =>
2026-02-04 00:33:13.819874 | orchestrator |   docker_cli_version: 5:27.5.1
2026-02-04 00:33:13.819880 | orchestrator | ok: [testbed-node-1] =>
2026-02-04 00:33:13.819886 | orchestrator |   docker_cli_version: 5:27.5.1
2026-02-04 00:33:13.819892 | orchestrator | ok: [testbed-node-2] =>
2026-02-04 00:33:13.819899 | orchestrator |   docker_cli_version: 5:27.5.1
2026-02-04 00:33:13.819905 | orchestrator |
2026-02-04 00:33:13.819911 | orchestrator | TASK [osism.services.docker : Include block storage tasks] *********************
2026-02-04 00:33:13.819917 | orchestrator | Wednesday 04 February 2026 00:33:08 +0000 (0:00:00.353) 0:05:18.741 ****
2026-02-04 00:33:13.819924 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:33:13.819930 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:33:13.819937 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:33:13.819944 | orchestrator | skipping: [testbed-manager]
2026-02-04 00:33:13.819949 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:33:13.819955 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:33:13.819960 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:33:13.819966 | orchestrator |
2026-02-04 00:33:13.819975 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] **********************
2026-02-04 00:33:13.819984 | orchestrator | Wednesday 04 February 2026 00:33:08 +0000 (0:00:00.395) 0:05:19.137 ****
2026-02-04 00:33:13.819993 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:33:13.820002 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:33:13.820011 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:33:13.820019 | orchestrator | skipping: [testbed-manager]
2026-02-04 00:33:13.820024 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:33:13.820029 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:33:13.820035 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:33:13.820040 | orchestrator |
2026-02-04 00:33:13.820045 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ********************
2026-02-04 00:33:13.820051 | orchestrator | Wednesday 04 February 2026 00:33:08 +0000 (0:00:00.439) 0:05:19.577 ****
2026-02-04 00:33:13.820058 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 00:33:13.820065 | orchestrator |
2026-02-04 00:33:13.820071 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] ****
2026-02-04 00:33:13.820076 | orchestrator | Wednesday 04 February 2026 00:33:09 +0000 (0:00:00.498) 0:05:20.076 ****
2026-02-04 00:33:13.820081 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:33:13.820087 | orchestrator | ok: [testbed-manager]
2026-02-04 00:33:13.820092 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:33:13.820097 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:33:13.820103 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:33:13.820108 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:33:13.820113 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:33:13.820118 | orchestrator |
2026-02-04 00:33:13.820124 | orchestrator | TASK [osism.services.docker : Gather package facts] ****************************
2026-02-04 00:33:13.820129 | orchestrator | Wednesday 04 February 2026 00:33:10 +0000 (0:00:00.954) 0:05:21.030 ****
2026-02-04 00:33:13.820139 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:33:13.820144 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:33:13.820150 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:33:13.820155 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:33:13.820165 | orchestrator | ok: [testbed-manager]
2026-02-04 00:33:13.820171 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:33:13.820176 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:33:13.820181 | orchestrator |
2026-02-04 00:33:13.820187 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] ***
2026-02-04 00:33:13.820193 | orchestrator | Wednesday 04 February 2026 00:33:13 +0000 (0:00:03.040) 0:05:24.071 ****
2026-02-04 00:33:13.820199 | orchestrator | skipping: [testbed-node-3] => (item=containerd)
2026-02-04 00:33:13.820205 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)
2026-02-04 00:33:13.820210 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)
2026-02-04 00:33:13.820216 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:33:13.820221 | orchestrator | skipping: [testbed-node-4] => (item=containerd)
2026-02-04 00:33:13.820226 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)
2026-02-04 00:33:13.820231 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)
2026-02-04 00:33:13.820237 | orchestrator | skipping: [testbed-node-5] => (item=containerd)
2026-02-04 00:33:13.820242 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)
2026-02-04 00:33:13.820247 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)
2026-02-04 00:33:13.820253 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:33:13.820258 | orchestrator | skipping: [testbed-manager] => (item=containerd)
2026-02-04 00:33:13.820264 | orchestrator | skipping: [testbed-manager] => (item=docker.io)
2026-02-04 00:33:13.820269 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)
2026-02-04 00:33:13.820274 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:33:13.820279 | orchestrator | skipping: [testbed-node-0] => (item=containerd)
2026-02-04 00:33:13.820290 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)
2026-02-04 00:34:17.200436 | orchestrator | skipping: [testbed-manager]
2026-02-04 00:34:17.200552 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)
2026-02-04 00:34:17.200563 | orchestrator | skipping: [testbed-node-1] => (item=containerd)
2026-02-04 00:34:17.200571 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)
2026-02-04 00:34:17.200578 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)
2026-02-04 00:34:17.200616 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:34:17.200624 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:34:17.200630 | orchestrator | skipping: [testbed-node-2] => (item=containerd)
2026-02-04 00:34:17.200638 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)
2026-02-04 00:34:17.200646 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)
2026-02-04 00:34:17.200653 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:34:17.200660 | orchestrator |
2026-02-04 00:34:17.200669 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] *************
2026-02-04 00:34:17.200678 | orchestrator | Wednesday 04 February 2026 00:33:14 +0000 (0:00:00.827) 0:05:24.899 ****
2026-02-04 00:34:17.200685 | orchestrator | ok: [testbed-manager]
2026-02-04 00:34:17.200693 | orchestrator | changed: [testbed-node-3]
2026-02-04 00:34:17.200700 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:34:17.200707 | orchestrator | changed: [testbed-node-4]
2026-02-04 00:34:17.200713 | orchestrator | changed: [testbed-node-5]
2026-02-04 00:34:17.200720 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:34:17.200727 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:34:17.200734 | orchestrator |
2026-02-04 00:34:17.200741 | orchestrator | TASK [osism.services.docker : Add repository gpg key] **************************
2026-02-04 00:34:17.200747 | orchestrator | Wednesday 04 February 2026 00:33:20 +0000 (0:00:06.592) 0:05:31.492 ****
2026-02-04 00:34:17.200753 | orchestrator | changed: [testbed-node-3]
2026-02-04 00:34:17.200760 | orchestrator | changed: [testbed-node-4]
2026-02-04 00:34:17.200766 | orchestrator | changed: [testbed-node-5]
2026-02-04 00:34:17.200815 | orchestrator | ok: [testbed-manager]
2026-02-04 00:34:17.200823 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:34:17.200852 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:34:17.200859 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:34:17.200866 | orchestrator |
2026-02-04 00:34:17.200873 | orchestrator | TASK [osism.services.docker : Add repository] **********************************
2026-02-04 00:34:17.200880 | orchestrator | Wednesday 04 February 2026 00:33:21 +0000 (0:00:01.098) 0:05:32.590 ****
2026-02-04 00:34:17.200887 | orchestrator | ok: [testbed-manager]
2026-02-04 00:34:17.200894 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:34:17.200902 | orchestrator | changed: [testbed-node-3]
2026-02-04 00:34:17.200909 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:34:17.200916 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:34:17.200923 | orchestrator | changed: [testbed-node-4]
2026-02-04 00:34:17.200929 | orchestrator | changed: [testbed-node-5]
2026-02-04 00:34:17.200935 | orchestrator |
2026-02-04 00:34:17.200941 | orchestrator | TASK [osism.services.docker : Update package cache] ****************************
2026-02-04 00:34:17.200948 | orchestrator | Wednesday 04 February 2026 00:33:31 +0000 (0:00:09.114) 0:05:41.705 ****
2026-02-04 00:34:17.200954 | orchestrator | changed: [testbed-node-3]
2026-02-04 00:34:17.200960 | orchestrator | changed: [testbed-manager]
2026-02-04 00:34:17.200966 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:34:17.200972 | orchestrator | changed: [testbed-node-4]
2026-02-04 00:34:17.200978 | orchestrator | changed: [testbed-node-5]
2026-02-04 00:34:17.200985 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:34:17.200992 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:34:17.200998 | orchestrator |
2026-02-04 00:34:17.201004 | orchestrator | TASK [osism.services.docker : Pin docker package version] **********************
2026-02-04 00:34:17.201011 | orchestrator | Wednesday 04 February 2026 00:33:34 +0000 (0:00:03.370) 0:05:45.076 ****
2026-02-04 00:34:17.201018 | orchestrator | changed: [testbed-node-3]
2026-02-04 00:34:17.201025 | orchestrator | changed: [testbed-node-4]
2026-02-04 00:34:17.201031 | orchestrator | changed: [testbed-node-5]
2026-02-04 00:34:17.201037 | orchestrator | ok: [testbed-manager]
2026-02-04 00:34:17.201043 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:34:17.201049 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:34:17.201055 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:34:17.201062 | orchestrator |
2026-02-04 00:34:17.201081 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ******************
2026-02-04 00:34:17.201088 | orchestrator | Wednesday 04 February 2026 00:33:35 +0000 (0:00:01.554) 0:05:46.630 ****
2026-02-04 00:34:17.201094 | orchestrator | changed: [testbed-node-3]
2026-02-04 00:34:17.201100 | orchestrator | changed: [testbed-node-4]
2026-02-04 00:34:17.201106 | orchestrator | ok: [testbed-manager]
2026-02-04 00:34:17.201112 | orchestrator | changed: [testbed-node-5]
2026-02-04 00:34:17.201118 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:34:17.201125 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:34:17.201131 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:34:17.201138 | orchestrator |
2026-02-04 00:34:17.201144 | orchestrator | TASK [osism.services.docker : Unlock containerd package] ***********************
2026-02-04 00:34:17.201151 | orchestrator | Wednesday 04 February 2026 00:33:37 +0000 (0:00:01.381) 0:05:48.012 ****
2026-02-04 00:34:17.201158 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:34:17.201166 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:34:17.201173 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:34:17.201178 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:34:17.201182 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:34:17.201187 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:34:17.201193 | orchestrator | changed: [testbed-manager]
2026-02-04 00:34:17.201199 | orchestrator |
2026-02-04 00:34:17.201206 | orchestrator | TASK [osism.services.docker : Install containerd package] **********************
2026-02-04 00:34:17.201213 | orchestrator | Wednesday 04 February 2026 00:33:38 +0000 (0:00:00.865) 0:05:48.877 ****
2026-02-04 00:34:17.201220 | orchestrator | ok: [testbed-manager]
2026-02-04 00:34:17.201227 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:34:17.201233 | orchestrator | changed: [testbed-node-3]
2026-02-04 00:34:17.201244 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:34:17.201248 | orchestrator | changed: [testbed-node-4]
2026-02-04 00:34:17.201252 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:34:17.201255 | orchestrator | changed: [testbed-node-5]
2026-02-04 00:34:17.201259 | orchestrator |
2026-02-04 00:34:17.201263 | orchestrator | TASK [osism.services.docker : Lock containerd package] *************************
2026-02-04 00:34:17.201281 | orchestrator | Wednesday 04 February 2026 00:33:48 +0000 (0:00:10.018) 0:05:58.896 ****
2026-02-04 00:34:17.201285 | orchestrator | changed: [testbed-node-3]
2026-02-04 00:34:17.201289 | orchestrator | changed: [testbed-node-4]
2026-02-04 00:34:17.201292 | orchestrator | changed: [testbed-node-5]
2026-02-04 00:34:17.201296 | orchestrator | changed: [testbed-manager]
2026-02-04 00:34:17.201300 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:34:17.201303 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:34:17.201307 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:34:17.201311 | orchestrator |
2026-02-04 00:34:17.201315 | orchestrator | TASK [osism.services.docker : Install docker-cli package] **********************
2026-02-04 00:34:17.201318 | orchestrator | Wednesday 04 February 2026 00:33:49 +0000 (0:00:00.959) 0:05:59.856 ****
2026-02-04 00:34:17.201322 | orchestrator | ok: [testbed-manager]
2026-02-04 00:34:17.201326 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:34:17.201330 | orchestrator | changed: [testbed-node-3]
2026-02-04 00:34:17.201333 | orchestrator | changed: [testbed-node-5]
2026-02-04 00:34:17.201337 | orchestrator | changed: [testbed-node-4]
2026-02-04 00:34:17.201341 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:34:17.201344 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:34:17.201348 | orchestrator |
2026-02-04 00:34:17.201352 | orchestrator | TASK [osism.services.docker : Install docker package] **************************
2026-02-04 00:34:17.201355 | orchestrator | Wednesday 04 February 2026 00:33:59 +0000 (0:00:10.235) 0:06:10.092 ****
2026-02-04 00:34:17.201360 | orchestrator | ok: [testbed-manager]
2026-02-04 00:34:17.201364 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:34:17.201368 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:34:17.201372 | orchestrator | changed: [testbed-node-3]
2026-02-04 00:34:17.201375 | orchestrator | changed: [testbed-node-4]
2026-02-04 00:34:17.201379 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:34:17.201383 | orchestrator | changed: [testbed-node-5]
2026-02-04 00:34:17.201386 | orchestrator |
2026-02-04 00:34:17.201390 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] ***
2026-02-04 00:34:17.201394 | orchestrator | Wednesday 04 February 2026 00:34:10 +0000 (0:00:11.071) 0:06:21.164 ****
2026-02-04 00:34:17.201398 | orchestrator | ok: [testbed-node-3] => (item=python3-docker)
2026-02-04 00:34:17.201401 | orchestrator | ok: [testbed-node-4] => (item=python3-docker)
2026-02-04 00:34:17.201406 | orchestrator | ok: [testbed-node-5] => (item=python3-docker)
2026-02-04 00:34:17.201409 | orchestrator | ok: [testbed-manager] => (item=python3-docker)
2026-02-04 00:34:17.201413 | orchestrator | ok: [testbed-node-0] => (item=python3-docker)
2026-02-04 00:34:17.201417 | orchestrator | ok: [testbed-node-1] => (item=python3-docker)
2026-02-04 00:34:17.201421 | orchestrator | ok: [testbed-node-3] => (item=python-docker)
2026-02-04 00:34:17.201424 | orchestrator | ok: [testbed-node-4] => (item=python-docker)
2026-02-04 00:34:17.201428 | orchestrator | ok: [testbed-node-2] => (item=python3-docker)
2026-02-04 00:34:17.201432 | orchestrator | ok: [testbed-manager] => (item=python-docker)
2026-02-04 00:34:17.201435 | orchestrator | ok: [testbed-node-5] => (item=python-docker)
2026-02-04 00:34:17.201439 | orchestrator | ok: [testbed-node-0] => (item=python-docker)
2026-02-04 00:34:17.201443 | orchestrator | ok: [testbed-node-1] => (item=python-docker)
2026-02-04 00:34:17.201446 | orchestrator | ok: [testbed-node-2] => (item=python-docker)
2026-02-04 00:34:17.201450 | orchestrator |
2026-02-04 00:34:17.201454 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ******************
2026-02-04 00:34:17.201458 | orchestrator | Wednesday 04 February 2026 00:34:11 +0000 (0:00:01.249) 0:06:22.413 ****
2026-02-04 00:34:17.201465 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:34:17.201468 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:34:17.201472 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:34:17.201476 | orchestrator | skipping: [testbed-manager]
2026-02-04 00:34:17.201480 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:34:17.201483 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:34:17.201487 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:34:17.201491 | orchestrator |
2026-02-04 00:34:17.201494 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] ***
2026-02-04 00:34:17.201498 | orchestrator | Wednesday 04 February 2026 00:34:12 +0000 (0:00:00.583) 0:06:22.996 ****
2026-02-04 00:34:17.201502 | orchestrator | ok: [testbed-manager]
2026-02-04 00:34:17.201506 | orchestrator | changed: [testbed-node-3]
2026-02-04 00:34:17.201509 | orchestrator | changed: [testbed-node-4]
2026-02-04 00:34:17.201513 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:34:17.201517 | orchestrator | changed: [testbed-node-5]
2026-02-04 00:34:17.201520 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:34:17.201524 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:34:17.201528 | orchestrator |
2026-02-04 00:34:17.201532 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] ***
2026-02-04 00:34:17.201536 | orchestrator | Wednesday 04 February 2026 00:34:16 +0000 (0:00:03.832) 0:06:26.829 ****
2026-02-04 00:34:17.201540 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:34:17.201544 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:34:17.201548 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:34:17.201551 | orchestrator | skipping: [testbed-manager]
2026-02-04 00:34:17.201555 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:34:17.201559 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:34:17.201562 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:34:17.201566 | orchestrator |
2026-02-04 00:34:17.201570 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] ***
2026-02-04 00:34:17.201574 | orchestrator | Wednesday 04 February 2026 00:34:16 +0000 (0:00:00.740) 0:06:27.570 ****
2026-02-04 00:34:17.201578 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)
2026-02-04 00:34:17.201610 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)
2026-02-04 00:34:17.201614 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:34:17.201618 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)
2026-02-04 00:34:17.201622 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)
2026-02-04 00:34:17.201625 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:34:17.201629 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)
2026-02-04 00:34:17.201633 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)
2026-02-04 00:34:17.201637 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:34:17.201643 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)
2026-02-04 00:34:37.060557 | orchestrator | skipping: [testbed-manager] => (item=python-docker)
2026-02-04 00:34:37.060686 | orchestrator | skipping: [testbed-manager]
2026-02-04 00:34:37.060709 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)
2026-02-04 00:34:37.060726 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)
2026-02-04 00:34:37.060736 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:34:37.060745 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)
2026-02-04 00:34:37.060754 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)
2026-02-04 00:34:37.060763 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:34:37.060816 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)
2026-02-04 00:34:37.060832 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)
2026-02-04 00:34:37.060846 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:34:37.060860 | orchestrator |
2026-02-04 00:34:37.060875 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] ***
2026-02-04 00:34:37.060923 | orchestrator | Wednesday 04 February 2026 00:34:17 +0000 (0:00:00.597) 0:06:28.168 ****
2026-02-04 00:34:37.060938 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:34:37.060951 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:34:37.060966 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:34:37.060981 | orchestrator | skipping: [testbed-manager]
2026-02-04 00:34:37.060995 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:34:37.061011 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:34:37.061025 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:34:37.061041 | orchestrator |
2026-02-04 00:34:37.061055 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] ***
2026-02-04 00:34:37.061066 | orchestrator | Wednesday 04 February 2026 00:34:18 +0000 (0:00:00.539) 0:06:28.707 ****
2026-02-04 00:34:37.061077 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:34:37.061087 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:34:37.061097 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:34:37.061108 | orchestrator | skipping: [testbed-manager]
2026-02-04 00:34:37.061118 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:34:37.061128 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:34:37.061139 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:34:37.061150 | orchestrator |
2026-02-04 00:34:37.061160 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] *******
2026-02-04 00:34:37.061170 | orchestrator | Wednesday 04 February 2026 00:34:18 +0000 (0:00:00.554) 0:06:29.261 ****
2026-02-04 00:34:37.061181 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:34:37.061192 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:34:37.061202 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:34:37.061211 | orchestrator | skipping: [testbed-manager]
2026-02-04 00:34:37.061226 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:34:37.061241 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:34:37.061256 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:34:37.061270 | orchestrator |
2026-02-04 00:34:37.061285 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] *****
2026-02-04 00:34:37.061299 | orchestrator | Wednesday 04 February 2026 00:34:19 +0000 (0:00:00.734) 0:06:29.996 ****
2026-02-04 00:34:37.061314 | orchestrator | ok: [testbed-manager]
2026-02-04 00:34:37.061330 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:34:37.061345 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:34:37.061360 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:34:37.061374 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:34:37.061388 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:34:37.061402 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:34:37.061415 | orchestrator |
2026-02-04 00:34:37.061429 | orchestrator | TASK [osism.services.docker : Include config tasks] ****************************
2026-02-04 00:34:37.061443 | orchestrator | Wednesday 04 February 2026 00:34:21 +0000 (0:00:01.890) 0:06:31.886 ****
2026-02-04 00:34:37.061459 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 00:34:37.061541 | orchestrator |
2026-02-04 00:34:37.061577 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************
2026-02-04 00:34:37.061593 | orchestrator | Wednesday 04 February 2026 00:34:22 +0000 (0:00:00.943) 0:06:32.830 ****
2026-02-04 00:34:37.061608 | orchestrator | changed: [testbed-node-3]
2026-02-04 00:34:37.061622 | orchestrator | changed: [testbed-node-4]
2026-02-04 00:34:37.061635 | orchestrator | changed: [testbed-node-5]
2026-02-04 00:34:37.061650 | orchestrator | ok: [testbed-manager]
2026-02-04 00:34:37.061666 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:34:37.061681 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:34:37.061696 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:34:37.061712 | orchestrator |
2026-02-04 00:34:37.061726 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] ****************
2026-02-04 00:34:37.061822 | orchestrator | Wednesday 04 February 2026 00:34:23 +0000 (0:00:00.890) 0:06:33.720 ****
2026-02-04 00:34:37.061853 | orchestrator | changed: [testbed-node-3]
2026-02-04 00:34:37.061868 | orchestrator | changed: [testbed-node-5]
2026-02-04 00:34:37.061883 | orchestrator | ok: [testbed-manager]
2026-02-04 00:34:37.061899 | orchestrator | changed: [testbed-node-4]
2026-02-04 00:34:37.061915 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:34:37.061931 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:34:37.061947 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:34:37.061963 | orchestrator |
2026-02-04 00:34:37.061980 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] ***********************
2026-02-04 00:34:37.061997 | orchestrator | Wednesday 04 February 2026 00:34:24 +0000 (0:00:01.139) 0:06:34.860 ****
2026-02-04 00:34:37.062012 | orchestrator | changed: [testbed-node-3]
2026-02-04 00:34:37.062166 | orchestrator | changed: [testbed-node-4]
2026-02-04 00:34:37.062183 | orchestrator | ok: [testbed-manager]
2026-02-04 00:34:37.062197 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:34:37.062212 | orchestrator | changed: [testbed-node-5]
2026-02-04 00:34:37.062226 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:34:37.062240 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:34:37.062255 | orchestrator |
2026-02-04 00:34:37.062273 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] ***
2026-02-04 00:34:37.062319 | orchestrator | Wednesday 04 February 2026 00:34:25 +0000 (0:00:01.400) 0:06:36.260 ****
2026-02-04 00:34:37.062337 | orchestrator | skipping: [testbed-manager]
2026-02-04 00:34:37.062353 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:34:37.062368 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:34:37.062383 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:34:37.062397 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:34:37.062412 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:34:37.062425 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:34:37.062441 | orchestrator |
2026-02-04 00:34:37.062456 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ******************
2026-02-04 00:34:37.062472 | orchestrator | Wednesday 04 February 2026 00:34:27 +0000 (0:00:01.443) 0:06:37.704 ****
2026-02-04 00:34:37.062487 | orchestrator | changed: [testbed-node-3]
2026-02-04 00:34:37.062501 | orchestrator | changed: [testbed-node-4]
2026-02-04 00:34:37.062515 | orchestrator | changed: [testbed-node-5]
2026-02-04 00:34:37.062530 | orchestrator | ok: [testbed-manager]
2026-02-04 00:34:37.062545 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:34:37.062560 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:34:37.062575 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:34:37.062591 | orchestrator |
2026-02-04
00:34:37.062606 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] ************* 2026-02-04 00:34:37.062622 | orchestrator | Wednesday 04 February 2026 00:34:28 +0000 (0:00:01.410) 0:06:39.114 **** 2026-02-04 00:34:37.062638 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:34:37.062654 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:34:37.062669 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:34:37.062684 | orchestrator | changed: [testbed-manager] 2026-02-04 00:34:37.062699 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:34:37.062713 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:34:37.062728 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:34:37.062744 | orchestrator | 2026-02-04 00:34:37.062761 | orchestrator | TASK [osism.services.docker : Include service tasks] *************************** 2026-02-04 00:34:37.062806 | orchestrator | Wednesday 04 February 2026 00:34:29 +0000 (0:00:01.431) 0:06:40.546 **** 2026-02-04 00:34:37.062823 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 00:34:37.062842 | orchestrator | 2026-02-04 00:34:37.062858 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] *************************** 2026-02-04 00:34:37.062875 | orchestrator | Wednesday 04 February 2026 00:34:31 +0000 (0:00:01.147) 0:06:41.693 **** 2026-02-04 00:34:37.062915 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:34:37.062932 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:34:37.062948 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:34:37.062965 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:34:37.062981 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:34:37.062995 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:34:37.063011 | orchestrator | ok: 
[testbed-manager] 2026-02-04 00:34:37.063026 | orchestrator | 2026-02-04 00:34:37.063041 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2026-02-04 00:34:37.063057 | orchestrator | Wednesday 04 February 2026 00:34:32 +0000 (0:00:01.322) 0:06:43.016 **** 2026-02-04 00:34:37.063073 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:34:37.063088 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:34:37.063103 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:34:37.063119 | orchestrator | ok: [testbed-manager] 2026-02-04 00:34:37.063134 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:34:37.063151 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:34:37.063167 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:34:37.063184 | orchestrator | 2026-02-04 00:34:37.063200 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2026-02-04 00:34:37.063217 | orchestrator | Wednesday 04 February 2026 00:34:33 +0000 (0:00:01.136) 0:06:44.153 **** 2026-02-04 00:34:37.063234 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:34:37.063250 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:34:37.063265 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:34:37.063281 | orchestrator | ok: [testbed-manager] 2026-02-04 00:34:37.063297 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:34:37.063313 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:34:37.063329 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:34:37.063345 | orchestrator | 2026-02-04 00:34:37.063362 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2026-02-04 00:34:37.063379 | orchestrator | Wednesday 04 February 2026 00:34:34 +0000 (0:00:01.351) 0:06:45.505 **** 2026-02-04 00:34:37.063396 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:34:37.063412 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:34:37.063429 | orchestrator | ok: [testbed-node-5] 2026-02-04 
00:34:37.063445 | orchestrator | ok: [testbed-manager] 2026-02-04 00:34:37.063462 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:34:37.063478 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:34:37.063495 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:34:37.063511 | orchestrator | 2026-02-04 00:34:37.063528 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2026-02-04 00:34:37.063543 | orchestrator | Wednesday 04 February 2026 00:34:35 +0000 (0:00:01.131) 0:06:46.636 **** 2026-02-04 00:34:37.063567 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 00:34:37.063582 | orchestrator | 2026-02-04 00:34:37.063596 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-04 00:34:37.063612 | orchestrator | Wednesday 04 February 2026 00:34:36 +0000 (0:00:00.949) 0:06:47.586 **** 2026-02-04 00:34:37.063627 | orchestrator | 2026-02-04 00:34:37.063641 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-04 00:34:37.063656 | orchestrator | Wednesday 04 February 2026 00:34:36 +0000 (0:00:00.043) 0:06:47.630 **** 2026-02-04 00:34:37.063672 | orchestrator | 2026-02-04 00:34:37.063688 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-04 00:34:37.063702 | orchestrator | Wednesday 04 February 2026 00:34:36 +0000 (0:00:00.051) 0:06:47.681 **** 2026-02-04 00:34:37.063719 | orchestrator | 2026-02-04 00:34:37.063736 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-04 00:34:37.063833 | orchestrator | Wednesday 04 February 2026 00:34:37 +0000 (0:00:00.056) 0:06:47.737 **** 2026-02-04 00:35:04.433811 | orchestrator | 
2026-02-04 00:35:04.433928 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-02-04 00:35:04.433940 | orchestrator | Wednesday 04 February 2026 00:34:37 +0000 (0:00:00.042) 0:06:47.779 ****
2026-02-04 00:35:04.433947 | orchestrator |
2026-02-04 00:35:04.433954 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-02-04 00:35:04.433960 | orchestrator | Wednesday 04 February 2026 00:34:37 +0000 (0:00:00.051) 0:06:47.831 ****
2026-02-04 00:35:04.433967 | orchestrator |
2026-02-04 00:35:04.433974 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-02-04 00:35:04.433981 | orchestrator | Wednesday 04 February 2026 00:34:37 +0000 (0:00:00.048) 0:06:47.879 ****
2026-02-04 00:35:04.433987 | orchestrator |
2026-02-04 00:35:04.433994 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-02-04 00:35:04.434001 | orchestrator | Wednesday 04 February 2026 00:34:37 +0000 (0:00:00.045) 0:06:47.925 ****
2026-02-04 00:35:04.434008 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:35:04.434063 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:35:04.434071 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:35:04.434078 | orchestrator |
2026-02-04 00:35:04.434085 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] *************
2026-02-04 00:35:04.434091 | orchestrator | Wednesday 04 February 2026 00:34:38 +0000 (0:00:01.430) 0:06:49.356 ****
2026-02-04 00:35:04.434098 | orchestrator | changed: [testbed-node-3]
2026-02-04 00:35:04.434106 | orchestrator | changed: [testbed-node-4]
2026-02-04 00:35:04.434113 | orchestrator | changed: [testbed-manager]
2026-02-04 00:35:04.434120 | orchestrator | changed: [testbed-node-5]
2026-02-04 00:35:04.434126 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:35:04.434133 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:35:04.434140 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:35:04.434146 | orchestrator |
2026-02-04 00:35:04.434153 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart logrotate service] ***********
2026-02-04 00:35:04.434160 | orchestrator | Wednesday 04 February 2026 00:34:40 +0000 (0:00:01.501) 0:06:50.857 ****
2026-02-04 00:35:04.434167 | orchestrator | changed: [testbed-node-3]
2026-02-04 00:35:04.434173 | orchestrator | changed: [testbed-node-4]
2026-02-04 00:35:04.434180 | orchestrator | changed: [testbed-node-5]
2026-02-04 00:35:04.434191 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:35:04.434202 | orchestrator | changed: [testbed-manager]
2026-02-04 00:35:04.434211 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:35:04.434228 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:35:04.434239 | orchestrator |
2026-02-04 00:35:04.434250 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] ***************
2026-02-04 00:35:04.434261 | orchestrator | Wednesday 04 February 2026 00:34:41 +0000 (0:00:01.256) 0:06:52.114 ****
2026-02-04 00:35:04.434271 | orchestrator | skipping: [testbed-manager]
2026-02-04 00:35:04.434282 | orchestrator | changed: [testbed-node-4]
2026-02-04 00:35:04.434293 | orchestrator | changed: [testbed-node-3]
2026-02-04 00:35:04.434304 | orchestrator | changed: [testbed-node-5]
2026-02-04 00:35:04.434315 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:35:04.434327 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:35:04.434338 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:35:04.434349 | orchestrator |
2026-02-04 00:35:04.434361 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] ****
2026-02-04 00:35:04.434372 | orchestrator | Wednesday 04 February 2026 00:34:44 +0000 (0:00:02.592) 0:06:54.707 ****
2026-02-04 00:35:04.434383 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:35:04.434392 | orchestrator |
2026-02-04 00:35:04.434402 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************
2026-02-04 00:35:04.434415 | orchestrator | Wednesday 04 February 2026 00:34:44 +0000 (0:00:00.093) 0:06:54.801 ****
2026-02-04 00:35:04.434427 | orchestrator | ok: [testbed-manager]
2026-02-04 00:35:04.434438 | orchestrator | changed: [testbed-node-5]
2026-02-04 00:35:04.434449 | orchestrator | changed: [testbed-node-4]
2026-02-04 00:35:04.434506 | orchestrator | changed: [testbed-node-3]
2026-02-04 00:35:04.434532 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:35:04.434544 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:35:04.434556 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:35:04.434566 | orchestrator |
2026-02-04 00:35:04.434587 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] ***
2026-02-04 00:35:04.434596 | orchestrator | Wednesday 04 February 2026 00:34:45 +0000 (0:00:01.012) 0:06:55.813 ****
2026-02-04 00:35:04.434605 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:35:04.434613 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:35:04.434620 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:35:04.434628 | orchestrator | skipping: [testbed-manager]
2026-02-04 00:35:04.434635 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:35:04.434643 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:35:04.434653 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:35:04.434665 | orchestrator |
2026-02-04 00:35:04.434676 | orchestrator | TASK [osism.services.docker : Include facts tasks] *****************************
2026-02-04 00:35:04.434688 | orchestrator | Wednesday 04 February 2026 00:34:45 +0000 (0:00:00.843) 0:06:56.656 ****
2026-02-04 00:35:04.434701 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 00:35:04.434716 | orchestrator |
2026-02-04 00:35:04.434728 | orchestrator | TASK [osism.services.docker : Create facts directory] **************************
2026-02-04 00:35:04.434739 | orchestrator | Wednesday 04 February 2026 00:34:46 +0000 (0:00:00.964) 0:06:57.621 ****
2026-02-04 00:35:04.434748 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:35:04.434755 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:35:04.434793 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:35:04.434801 | orchestrator | ok: [testbed-manager]
2026-02-04 00:35:04.434807 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:35:04.434814 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:35:04.434820 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:35:04.434827 | orchestrator |
2026-02-04 00:35:04.434834 | orchestrator | TASK [osism.services.docker : Copy docker fact files] **************************
2026-02-04 00:35:04.434840 | orchestrator | Wednesday 04 February 2026 00:34:47 +0000 (0:00:00.929) 0:06:58.550 ****
2026-02-04 00:35:04.434847 | orchestrator | changed: [testbed-node-3] => (item=docker_containers)
2026-02-04 00:35:04.434871 | orchestrator | changed: [testbed-node-5] => (item=docker_containers)
2026-02-04 00:35:04.434878 | orchestrator | changed: [testbed-node-4] => (item=docker_containers)
2026-02-04 00:35:04.434885 | orchestrator | ok: [testbed-manager] => (item=docker_containers)
2026-02-04 00:35:04.434892 | orchestrator | changed: [testbed-node-0] => (item=docker_containers)
2026-02-04 00:35:04.434898 | orchestrator | changed: [testbed-node-1] => (item=docker_containers)
2026-02-04 00:35:04.434905 | orchestrator | changed: [testbed-node-2] => (item=docker_containers)
2026-02-04 00:35:04.434912 | orchestrator | changed: [testbed-node-3] => (item=docker_images)
2026-02-04 00:35:04.434919 | orchestrator | ok: [testbed-manager] => (item=docker_images)
2026-02-04 00:35:04.434925 | orchestrator | changed: [testbed-node-4] => (item=docker_images)
2026-02-04 00:35:04.434932 | orchestrator | changed: [testbed-node-5] => (item=docker_images)
2026-02-04 00:35:04.434938 | orchestrator | changed: [testbed-node-0] => (item=docker_images)
2026-02-04 00:35:04.434945 | orchestrator | changed: [testbed-node-1] => (item=docker_images)
2026-02-04 00:35:04.434952 | orchestrator | changed: [testbed-node-2] => (item=docker_images)
2026-02-04 00:35:04.434958 | orchestrator |
2026-02-04 00:35:04.434965 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] *******
2026-02-04 00:35:04.434972 | orchestrator | Wednesday 04 February 2026 00:34:50 +0000 (0:00:02.736) 0:07:01.287 ****
2026-02-04 00:35:04.434979 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:35:04.434991 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:35:04.435003 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:35:04.435023 | orchestrator | skipping: [testbed-manager]
2026-02-04 00:35:04.435036 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:35:04.435048 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:35:04.435055 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:35:04.435062 | orchestrator |
2026-02-04 00:35:04.435069 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] ***
2026-02-04 00:35:04.435075 | orchestrator | Wednesday 04 February 2026 00:34:51 +0000 (0:00:00.555) 0:07:01.842 ****
2026-02-04 00:35:04.435084 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 00:35:04.435092 | orchestrator |
2026-02-04 00:35:04.435099 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] ***
2026-02-04 00:35:04.435105 | orchestrator | Wednesday 04 February 2026 00:34:52 +0000 (0:00:00.889) 0:07:02.731 ****
2026-02-04 00:35:04.435112 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:35:04.435119 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:35:04.435125 | orchestrator | ok: [testbed-manager]
2026-02-04 00:35:04.435132 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:35:04.435138 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:35:04.435145 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:35:04.435151 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:35:04.435158 | orchestrator |
2026-02-04 00:35:04.435164 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ******
2026-02-04 00:35:04.435171 | orchestrator | Wednesday 04 February 2026 00:34:53 +0000 (0:00:01.062) 0:07:03.794 ****
2026-02-04 00:35:04.435177 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:35:04.435184 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:35:04.435190 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:35:04.435197 | orchestrator | ok: [testbed-manager]
2026-02-04 00:35:04.435203 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:35:04.435210 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:35:04.435216 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:35:04.435223 | orchestrator |
2026-02-04 00:35:04.435229 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] *************
2026-02-04 00:35:04.435236 | orchestrator | Wednesday 04 February 2026 00:34:53 +0000 (0:00:00.854) 0:07:04.649 ****
2026-02-04 00:35:04.435243 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:35:04.435249 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:35:04.435261 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:35:04.435268 | orchestrator | skipping: [testbed-manager]
2026-02-04 00:35:04.435275 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:35:04.435281 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:35:04.435288 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:35:04.435294 | orchestrator |
2026-02-04 00:35:04.435301 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] *********
2026-02-04 00:35:04.435307 | orchestrator | Wednesday 04 February 2026 00:34:54 +0000 (0:00:00.570) 0:07:05.220 ****
2026-02-04 00:35:04.435314 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:35:04.435321 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:35:04.435327 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:35:04.435334 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:35:04.435340 | orchestrator | ok: [testbed-manager]
2026-02-04 00:35:04.435346 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:35:04.435353 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:35:04.435359 | orchestrator |
2026-02-04 00:35:04.435369 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] ***************
2026-02-04 00:35:04.435380 | orchestrator | Wednesday 04 February 2026 00:34:56 +0000 (0:00:01.554) 0:07:06.774 ****
2026-02-04 00:35:04.435391 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:35:04.435402 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:35:04.435411 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:35:04.435418 | orchestrator | skipping: [testbed-manager]
2026-02-04 00:35:04.435430 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:35:04.435437 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:35:04.435443 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:35:04.435450 | orchestrator |
2026-02-04 00:35:04.435456 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] ****
2026-02-04 00:35:04.435463 | orchestrator | Wednesday 04 February 2026 00:34:56 +0000 (0:00:00.593) 0:07:07.368 ****
2026-02-04 00:35:04.435470 | orchestrator | ok: [testbed-manager]
2026-02-04 00:35:04.435476 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:35:04.435483 | orchestrator | changed: [testbed-node-3]
2026-02-04 00:35:04.435489 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:35:04.435496 | orchestrator | changed: [testbed-node-5]
2026-02-04 00:35:04.435502 | orchestrator | changed: [testbed-node-4]
2026-02-04 00:35:04.435515 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:35:38.321654 | orchestrator |
2026-02-04 00:35:38.321865 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] ***********
2026-02-04 00:35:38.321889 | orchestrator | Wednesday 04 February 2026 00:35:04 +0000 (0:00:07.830) 0:07:15.199 ****
2026-02-04 00:35:38.321901 | orchestrator | changed: [testbed-node-3]
2026-02-04 00:35:38.321914 | orchestrator | changed: [testbed-node-4]
2026-02-04 00:35:38.321925 | orchestrator | changed: [testbed-node-5]
2026-02-04 00:35:38.321936 | orchestrator | ok: [testbed-manager]
2026-02-04 00:35:38.321948 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:35:38.321959 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:35:38.321970 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:35:38.321981 | orchestrator |
2026-02-04 00:35:38.321992 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] **********************
2026-02-04 00:35:38.322003 | orchestrator | Wednesday 04 February 2026 00:35:05 +0000 (0:00:01.358) 0:07:16.557 ****
2026-02-04 00:35:38.322014 | orchestrator | ok: [testbed-manager]
2026-02-04 00:35:38.322123 | orchestrator | changed: [testbed-node-3]
2026-02-04 00:35:38.322137 | orchestrator | changed: [testbed-node-4]
2026-02-04 00:35:38.322150 | orchestrator | changed: [testbed-node-5]
2026-02-04 00:35:38.322164 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:35:38.322177 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:35:38.322190 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:35:38.322202 | orchestrator |
2026-02-04 00:35:38.322216 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] ****
2026-02-04 00:35:38.322229 | orchestrator | Wednesday 04 February 2026 00:35:07 +0000 (0:00:01.752) 0:07:18.309 ****
2026-02-04 00:35:38.322241 | orchestrator | changed: [testbed-node-3]
2026-02-04 00:35:38.322255 | orchestrator | ok: [testbed-manager]
2026-02-04 00:35:38.322268 | orchestrator | changed: [testbed-node-4]
2026-02-04 00:35:38.322280 | orchestrator | changed: [testbed-node-5]
2026-02-04 00:35:38.322293 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:35:38.322305 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:35:38.322317 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:35:38.322330 | orchestrator |
2026-02-04 00:35:38.322343 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-02-04 00:35:38.322357 | orchestrator | Wednesday 04 February 2026 00:35:09 +0000 (0:00:01.945) 0:07:20.255 ****
2026-02-04 00:35:38.322370 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:35:38.322382 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:35:38.322395 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:35:38.322408 | orchestrator | ok: [testbed-manager]
2026-02-04 00:35:38.322421 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:35:38.322433 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:35:38.322445 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:35:38.322459 | orchestrator |
2026-02-04 00:35:38.322472 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-02-04 00:35:38.322484 | orchestrator | Wednesday 04 February 2026 00:35:10 +0000 (0:00:01.318) 0:07:21.574 ****
2026-02-04 00:35:38.322496 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:35:38.322507 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:35:38.322577 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:35:38.322590 | orchestrator | skipping: [testbed-manager]
2026-02-04 00:35:38.322601 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:35:38.322612 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:35:38.322623 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:35:38.322634 | orchestrator |
2026-02-04 00:35:38.322645 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] *****
2026-02-04 00:35:38.322656 | orchestrator | Wednesday 04 February 2026 00:35:11 +0000 (0:00:00.917) 0:07:22.491 ****
2026-02-04 00:35:38.322667 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:35:38.322678 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:35:38.322689 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:35:38.322700 | orchestrator | skipping: [testbed-manager]
2026-02-04 00:35:38.322710 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:35:38.322721 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:35:38.322754 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:35:38.322766 | orchestrator |
2026-02-04 00:35:38.322777 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ******
2026-02-04 00:35:38.322788 | orchestrator | Wednesday 04 February 2026 00:35:12 +0000 (0:00:00.581) 0:07:23.073 ****
2026-02-04 00:35:38.322799 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:35:38.322810 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:35:38.322820 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:35:38.322831 | orchestrator | ok: [testbed-manager]
2026-02-04 00:35:38.322841 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:35:38.322852 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:35:38.322863 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:35:38.322873 | orchestrator |
2026-02-04 00:35:38.322884 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] ***
2026-02-04 00:35:38.322895 | orchestrator | Wednesday 04 February 2026 00:35:12 +0000 (0:00:00.560) 0:07:23.633 ****
2026-02-04 00:35:38.322906 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:35:38.322916 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:35:38.322927 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:35:38.322938 | orchestrator | ok: [testbed-manager]
2026-02-04 00:35:38.322948 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:35:38.322959 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:35:38.322969 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:35:38.322980 | orchestrator |
2026-02-04 00:35:38.322991 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] ***
2026-02-04 00:35:38.323002 | orchestrator | Wednesday 04 February 2026 00:35:13 +0000 (0:00:00.749) 0:07:24.382 ****
2026-02-04 00:35:38.323012 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:35:38.323023 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:35:38.323034 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:35:38.323044 | orchestrator | ok: [testbed-manager]
2026-02-04 00:35:38.323055 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:35:38.323065 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:35:38.323076 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:35:38.323086 | orchestrator |
2026-02-04 00:35:38.323097 | orchestrator | TASK [osism.services.chrony : Populate service facts] **************************
2026-02-04 00:35:38.323108 | orchestrator | Wednesday 04 February 2026 00:35:14 +0000 (0:00:00.585) 0:07:24.968 ****
2026-02-04 00:35:38.323118 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:35:38.323129 | orchestrator | ok: [testbed-manager]
2026-02-04 00:35:38.323140 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:35:38.323150 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:35:38.323161 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:35:38.323246 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:35:38.323261 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:35:38.323272 | orchestrator |
2026-02-04 00:35:38.323304 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************
2026-02-04 00:35:38.323316 | orchestrator | Wednesday 04 February 2026 00:35:19 +0000 (0:00:05.440) 0:07:30.408 ****
2026-02-04 00:35:38.323327 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:35:38.323367 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:35:38.323379 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:35:38.323389 | orchestrator | skipping: [testbed-manager]
2026-02-04 00:35:38.323400 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:35:38.323411 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:35:38.323422 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:35:38.323433 | orchestrator |
2026-02-04 00:35:38.323444 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] *****
2026-02-04 00:35:38.323454 | orchestrator | Wednesday 04 February 2026 00:35:20 +0000 (0:00:00.585) 0:07:30.994 ****
2026-02-04 00:35:38.323484 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 00:35:38.323499 | orchestrator |
2026-02-04 00:35:38.323510 | orchestrator | TASK [osism.services.chrony : Install package] *********************************
2026-02-04 00:35:38.323521 | orchestrator | Wednesday 04 February 2026 00:35:21 +0000 (0:00:01.124) 0:07:32.119 ****
2026-02-04 00:35:38.323532 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:35:38.323542 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:35:38.323553 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:35:38.323564 | orchestrator | ok: [testbed-manager]
2026-02-04 00:35:38.323574 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:35:38.323585 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:35:38.323595 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:35:38.323606 | orchestrator |
2026-02-04 00:35:38.323617 | orchestrator | TASK [osism.services.chrony : Manage chrony service] ***************************
2026-02-04 00:35:38.323628 | orchestrator | Wednesday 04 February 2026 00:35:23 +0000 (0:00:01.984) 0:07:34.104 ****
2026-02-04 00:35:38.323638 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:35:38.323649 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:35:38.323660 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:35:38.323670 | orchestrator | ok: [testbed-manager]
2026-02-04 00:35:38.323681 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:35:38.323691 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:35:38.323702 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:35:38.323712 | orchestrator |
2026-02-04 00:35:38.323723 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] **************
2026-02-04 00:35:38.323757 | orchestrator | Wednesday 04 February 2026 00:35:24 +0000 (0:00:01.187) 0:07:35.292 ****
2026-02-04 00:35:38.323776 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:35:38.323794 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:35:38.323811 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:35:38.323822 | orchestrator | ok: [testbed-manager]
2026-02-04 00:35:38.323833 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:35:38.323844 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:35:38.323855 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:35:38.323865 | orchestrator |
2026-02-04 00:35:38.323876 | orchestrator | TASK [osism.services.chrony : Copy configuration file] *************************
2026-02-04 00:35:38.323887 | orchestrator | Wednesday 04 February 2026 00:35:25 +0000 (0:00:00.962) 0:07:36.254 ****
2026-02-04 00:35:38.323898 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-02-04 00:35:38.323910 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-02-04 00:35:38.323922 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-02-04 00:35:38.323938 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-02-04 00:35:38.323949 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-02-04 00:35:38.323968 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-02-04 00:35:38.323978 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-02-04 00:35:38.323989 | orchestrator |
2026-02-04 00:35:38.324000 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ******
2026-02-04 00:35:38.324011 | orchestrator | Wednesday 04 February 2026 00:35:27 +0000 (0:00:01.975) 0:07:38.230 ****
2026-02-04 00:35:38.324022 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 00:35:38.324033 | orchestrator |
2026-02-04 00:35:38.324044 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] ****************************
2026-02-04 00:35:38.324057 |
orchestrator | Wednesday 04 February 2026 00:35:28 +0000 (0:00:00.892) 0:07:39.122 **** 2026-02-04 00:35:38.324076 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:35:38.324092 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:35:38.324109 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:35:38.324129 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:35:38.324148 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:35:38.324165 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:35:38.324176 | orchestrator | changed: [testbed-manager] 2026-02-04 00:35:38.324187 | orchestrator | 2026-02-04 00:35:38.324207 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] ***************************** 2026-02-04 00:36:10.260978 | orchestrator | Wednesday 04 February 2026 00:35:38 +0000 (0:00:09.878) 0:07:49.001 **** 2026-02-04 00:36:10.261099 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:36:10.261115 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:36:10.261127 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:36:10.261139 | orchestrator | ok: [testbed-manager] 2026-02-04 00:36:10.261150 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:36:10.261161 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:36:10.261172 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:36:10.261183 | orchestrator | 2026-02-04 00:36:10.261195 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] ********* 2026-02-04 00:36:10.261207 | orchestrator | Wednesday 04 February 2026 00:35:40 +0000 (0:00:02.100) 0:07:51.101 **** 2026-02-04 00:36:10.261218 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:36:10.261229 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:36:10.261240 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:36:10.261251 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:36:10.261262 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:36:10.261273 | orchestrator | ok: [testbed-node-2] 
2026-02-04 00:36:10.261284 | orchestrator | 2026-02-04 00:36:10.261295 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] *************** 2026-02-04 00:36:10.261306 | orchestrator | Wednesday 04 February 2026 00:35:41 +0000 (0:00:01.355) 0:07:52.457 **** 2026-02-04 00:36:10.261317 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:36:10.261329 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:36:10.261343 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:36:10.261357 | orchestrator | changed: [testbed-manager] 2026-02-04 00:36:10.261370 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:36:10.261383 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:36:10.261396 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:36:10.261409 | orchestrator | 2026-02-04 00:36:10.261422 | orchestrator | PLAY [Apply bootstrap role part 2] ********************************************* 2026-02-04 00:36:10.261435 | orchestrator | 2026-02-04 00:36:10.261449 | orchestrator | TASK [Include hardening role] ************************************************** 2026-02-04 00:36:10.261462 | orchestrator | Wednesday 04 February 2026 00:35:43 +0000 (0:00:01.521) 0:07:53.979 **** 2026-02-04 00:36:10.261475 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:36:10.261515 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:36:10.261529 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:36:10.261542 | orchestrator | skipping: [testbed-manager] 2026-02-04 00:36:10.261555 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:36:10.261567 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:36:10.261579 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:36:10.261591 | orchestrator | 2026-02-04 00:36:10.261604 | orchestrator | PLAY [Apply bootstrap roles part 3] ******************************************** 2026-02-04 00:36:10.261616 | orchestrator | 2026-02-04 00:36:10.261628 | orchestrator | TASK 
[osism.services.journald : Copy configuration file] *********************** 2026-02-04 00:36:10.261641 | orchestrator | Wednesday 04 February 2026 00:35:43 +0000 (0:00:00.568) 0:07:54.548 **** 2026-02-04 00:36:10.261654 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:36:10.261667 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:36:10.261680 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:36:10.261694 | orchestrator | changed: [testbed-manager] 2026-02-04 00:36:10.261741 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:36:10.261753 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:36:10.261768 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:36:10.261786 | orchestrator | 2026-02-04 00:36:10.261807 | orchestrator | TASK [osism.services.journald : Manage journald service] *********************** 2026-02-04 00:36:10.261825 | orchestrator | Wednesday 04 February 2026 00:35:45 +0000 (0:00:01.357) 0:07:55.905 **** 2026-02-04 00:36:10.261846 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:36:10.261858 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:36:10.261869 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:36:10.261880 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:36:10.261891 | orchestrator | ok: [testbed-manager] 2026-02-04 00:36:10.261901 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:36:10.261912 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:36:10.261928 | orchestrator | 2026-02-04 00:36:10.261943 | orchestrator | TASK [Include auditd role] ***************************************************** 2026-02-04 00:36:10.261954 | orchestrator | Wednesday 04 February 2026 00:35:46 +0000 (0:00:01.599) 0:07:57.505 **** 2026-02-04 00:36:10.261965 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:36:10.261990 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:36:10.262002 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:36:10.262077 | orchestrator | skipping: [testbed-manager] 
2026-02-04 00:36:10.262090 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:36:10.262101 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:36:10.262112 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:36:10.262123 | orchestrator | 2026-02-04 00:36:10.262133 | orchestrator | TASK [Include smartd role] ***************************************************** 2026-02-04 00:36:10.262144 | orchestrator | Wednesday 04 February 2026 00:35:47 +0000 (0:00:00.844) 0:07:58.349 **** 2026-02-04 00:36:10.262156 | orchestrator | included: osism.services.smartd for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 00:36:10.262168 | orchestrator | 2026-02-04 00:36:10.262179 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] ***** 2026-02-04 00:36:10.262190 | orchestrator | Wednesday 04 February 2026 00:35:48 +0000 (0:00:00.892) 0:07:59.241 **** 2026-02-04 00:36:10.262203 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 00:36:10.262217 | orchestrator | 2026-02-04 00:36:10.262228 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] ******************* 2026-02-04 00:36:10.262238 | orchestrator | Wednesday 04 February 2026 00:35:49 +0000 (0:00:00.860) 0:08:00.102 **** 2026-02-04 00:36:10.262249 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:36:10.262260 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:36:10.262271 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:36:10.262282 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:36:10.262302 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:36:10.262313 | orchestrator | changed: [testbed-manager] 2026-02-04 00:36:10.262324 | 
orchestrator | changed: [testbed-node-5] 2026-02-04 00:36:10.262335 | orchestrator | 2026-02-04 00:36:10.262365 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] **************** 2026-02-04 00:36:10.262376 | orchestrator | Wednesday 04 February 2026 00:35:58 +0000 (0:00:09.003) 0:08:09.105 **** 2026-02-04 00:36:10.262387 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:36:10.262398 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:36:10.262409 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:36:10.262419 | orchestrator | changed: [testbed-manager] 2026-02-04 00:36:10.262430 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:36:10.262441 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:36:10.262452 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:36:10.262463 | orchestrator | 2026-02-04 00:36:10.262474 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] *********** 2026-02-04 00:36:10.262485 | orchestrator | Wednesday 04 February 2026 00:35:59 +0000 (0:00:00.981) 0:08:10.086 **** 2026-02-04 00:36:10.262496 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:36:10.262506 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:36:10.262517 | orchestrator | changed: [testbed-manager] 2026-02-04 00:36:10.262528 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:36:10.262539 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:36:10.262549 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:36:10.262560 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:36:10.262571 | orchestrator | 2026-02-04 00:36:10.262582 | orchestrator | TASK [osism.services.smartd : Manage smartd service] *************************** 2026-02-04 00:36:10.262593 | orchestrator | Wednesday 04 February 2026 00:36:00 +0000 (0:00:01.365) 0:08:11.452 **** 2026-02-04 00:36:10.262604 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:36:10.262615 | orchestrator | 
changed: [testbed-node-4] 2026-02-04 00:36:10.262625 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:36:10.262636 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:36:10.262646 | orchestrator | changed: [testbed-manager] 2026-02-04 00:36:10.262657 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:36:10.262668 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:36:10.262679 | orchestrator | 2026-02-04 00:36:10.262690 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] *********** 2026-02-04 00:36:10.262701 | orchestrator | Wednesday 04 February 2026 00:36:02 +0000 (0:00:01.991) 0:08:13.443 **** 2026-02-04 00:36:10.262731 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:36:10.262742 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:36:10.262753 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:36:10.262763 | orchestrator | changed: [testbed-manager] 2026-02-04 00:36:10.262774 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:36:10.262785 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:36:10.262795 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:36:10.262806 | orchestrator | 2026-02-04 00:36:10.262817 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] *************** 2026-02-04 00:36:10.262827 | orchestrator | Wednesday 04 February 2026 00:36:04 +0000 (0:00:01.332) 0:08:14.775 **** 2026-02-04 00:36:10.262839 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:36:10.262849 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:36:10.262860 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:36:10.262871 | orchestrator | changed: [testbed-manager] 2026-02-04 00:36:10.262882 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:36:10.262892 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:36:10.262903 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:36:10.262914 | orchestrator | 2026-02-04 
00:36:10.262925 | orchestrator | PLAY [Set state bootstrap] ***************************************************** 2026-02-04 00:36:10.262936 | orchestrator | 2026-02-04 00:36:10.262947 | orchestrator | TASK [Set osism.bootstrap.status fact] ***************************************** 2026-02-04 00:36:10.262958 | orchestrator | Wednesday 04 February 2026 00:36:05 +0000 (0:00:01.175) 0:08:15.951 **** 2026-02-04 00:36:10.262977 | orchestrator | included: osism.commons.state for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 00:36:10.262988 | orchestrator | 2026-02-04 00:36:10.262999 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2026-02-04 00:36:10.263010 | orchestrator | Wednesday 04 February 2026 00:36:06 +0000 (0:00:01.030) 0:08:16.982 **** 2026-02-04 00:36:10.263021 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:36:10.263037 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:36:10.263048 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:36:10.263059 | orchestrator | ok: [testbed-manager] 2026-02-04 00:36:10.263070 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:36:10.263081 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:36:10.263092 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:36:10.263102 | orchestrator | 2026-02-04 00:36:10.263113 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2026-02-04 00:36:10.263124 | orchestrator | Wednesday 04 February 2026 00:36:07 +0000 (0:00:00.842) 0:08:17.824 **** 2026-02-04 00:36:10.263135 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:36:10.263146 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:36:10.263157 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:36:10.263167 | orchestrator | changed: [testbed-manager] 2026-02-04 00:36:10.263178 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:36:10.263189 | 
orchestrator | changed: [testbed-node-1] 2026-02-04 00:36:10.263200 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:36:10.263211 | orchestrator | 2026-02-04 00:36:10.263222 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] ************************************** 2026-02-04 00:36:10.263232 | orchestrator | Wednesday 04 February 2026 00:36:08 +0000 (0:00:01.181) 0:08:19.006 **** 2026-02-04 00:36:10.263244 | orchestrator | included: osism.commons.state for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 00:36:10.263255 | orchestrator | 2026-02-04 00:36:10.263265 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2026-02-04 00:36:10.263276 | orchestrator | Wednesday 04 February 2026 00:36:09 +0000 (0:00:01.054) 0:08:20.060 **** 2026-02-04 00:36:10.263288 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:36:10.263307 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:36:10.263326 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:36:10.263343 | orchestrator | ok: [testbed-manager] 2026-02-04 00:36:10.263359 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:36:10.263374 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:36:10.263391 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:36:10.263407 | orchestrator | 2026-02-04 00:36:10.263436 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2026-02-04 00:36:11.934254 | orchestrator | Wednesday 04 February 2026 00:36:10 +0000 (0:00:00.877) 0:08:20.937 **** 2026-02-04 00:36:11.934357 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:36:11.934374 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:36:11.934386 | orchestrator | changed: [testbed-manager] 2026-02-04 00:36:11.934397 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:36:11.934408 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:36:11.934419 | 
orchestrator | changed: [testbed-node-1] 2026-02-04 00:36:11.934429 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:36:11.934440 | orchestrator | 2026-02-04 00:36:11.934452 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 00:36:11.934478 | orchestrator | testbed-manager : ok=168  changed=40  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0 2026-02-04 00:36:11.934491 | orchestrator | testbed-node-0 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2026-02-04 00:36:11.934502 | orchestrator | testbed-node-1 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2026-02-04 00:36:11.934542 | orchestrator | testbed-node-2 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2026-02-04 00:36:11.934554 | orchestrator | testbed-node-3 : ok=175  changed=65  unreachable=0 failed=0 skipped=38  rescued=0 ignored=0 2026-02-04 00:36:11.934565 | orchestrator | testbed-node-4 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2026-02-04 00:36:11.934575 | orchestrator | testbed-node-5 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2026-02-04 00:36:11.934586 | orchestrator | 2026-02-04 00:36:11.934597 | orchestrator | 2026-02-04 00:36:11.934608 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-04 00:36:11.934619 | orchestrator | Wednesday 04 February 2026 00:36:11 +0000 (0:00:01.170) 0:08:22.108 **** 2026-02-04 00:36:11.934630 | orchestrator | =============================================================================== 2026-02-04 00:36:11.934641 | orchestrator | osism.commons.packages : Install required packages --------------------- 72.01s 2026-02-04 00:36:11.934652 | orchestrator | osism.commons.packages : Download required packages -------------------- 37.14s 2026-02-04 00:36:11.934662 | orchestrator | 
osism.commons.cleanup : Cleanup installed packages --------------------- 34.47s 2026-02-04 00:36:11.934673 | orchestrator | osism.commons.repository : Update package cache ------------------------ 16.16s 2026-02-04 00:36:11.934684 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 13.98s 2026-02-04 00:36:11.934695 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 11.99s 2026-02-04 00:36:11.934731 | orchestrator | osism.services.docker : Install docker package ------------------------- 11.07s 2026-02-04 00:36:11.934742 | orchestrator | osism.services.docker : Install docker-cli package --------------------- 10.24s 2026-02-04 00:36:11.934753 | orchestrator | osism.services.docker : Install containerd package --------------------- 10.02s 2026-02-04 00:36:11.934764 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.88s 2026-02-04 00:36:11.934775 | orchestrator | osism.services.docker : Add repository ---------------------------------- 9.11s 2026-02-04 00:36:11.934801 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 9.00s 2026-02-04 00:36:11.934816 | orchestrator | osism.services.rng : Install rng package -------------------------------- 8.66s 2026-02-04 00:36:11.934828 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 8.32s 2026-02-04 00:36:11.934841 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 8.17s 2026-02-04 00:36:11.934854 | orchestrator | osism.commons.sysctl : Set sysctl parameters on rabbitmq ---------------- 8.13s 2026-02-04 00:36:11.934867 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.83s 2026-02-04 00:36:11.934880 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.59s 2026-02-04 00:36:11.934893 | orchestrator | 
osism.commons.cleanup : Remove dependencies that are no longer required --- 6.26s 2026-02-04 00:36:11.934905 | orchestrator | osism.commons.services : Populate service facts ------------------------- 6.07s 2026-02-04 00:36:12.296188 | orchestrator | + osism apply fail2ban 2026-02-04 00:36:25.337812 | orchestrator | 2026-02-04 00:36:25 | INFO  | Prepare task for execution of fail2ban. 2026-02-04 00:36:25.443215 | orchestrator | 2026-02-04 00:36:25 | INFO  | Task 36039cc2-0bca-4d76-9b83-bda0f38cef3e (fail2ban) was prepared for execution. 2026-02-04 00:36:25.443298 | orchestrator | 2026-02-04 00:36:25 | INFO  | It takes a moment until task 36039cc2-0bca-4d76-9b83-bda0f38cef3e (fail2ban) has been started and output is visible here. 2026-02-04 00:36:48.555194 | orchestrator | 2026-02-04 00:36:48.555314 | orchestrator | PLAY [Apply role fail2ban] ***************************************************** 2026-02-04 00:36:48.555362 | orchestrator | 2026-02-04 00:36:48.555374 | orchestrator | TASK [osism.services.fail2ban : Include distribution specific install tasks] *** 2026-02-04 00:36:48.555386 | orchestrator | Wednesday 04 February 2026 00:36:30 +0000 (0:00:00.307) 0:00:00.307 **** 2026-02-04 00:36:48.555399 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/fail2ban/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-04 00:36:48.555412 | orchestrator | 2026-02-04 00:36:48.555424 | orchestrator | TASK [osism.services.fail2ban : Install fail2ban package] ********************** 2026-02-04 00:36:48.555435 | orchestrator | Wednesday 04 February 2026 00:36:31 +0000 (0:00:01.224) 0:00:01.532 **** 2026-02-04 00:36:48.555446 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:36:48.555458 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:36:48.555469 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:36:48.555480 
| orchestrator | changed: [testbed-node-3] 2026-02-04 00:36:48.555510 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:36:48.555532 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:36:48.555543 | orchestrator | changed: [testbed-manager] 2026-02-04 00:36:48.555554 | orchestrator | 2026-02-04 00:36:48.555565 | orchestrator | TASK [osism.services.fail2ban : Copy configuration files] ********************** 2026-02-04 00:36:48.555576 | orchestrator | Wednesday 04 February 2026 00:36:43 +0000 (0:00:11.524) 0:00:13.057 **** 2026-02-04 00:36:48.555587 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:36:48.555598 | orchestrator | changed: [testbed-manager] 2026-02-04 00:36:48.555608 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:36:48.555619 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:36:48.555630 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:36:48.555640 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:36:48.555651 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:36:48.555662 | orchestrator | 2026-02-04 00:36:48.555673 | orchestrator | TASK [osism.services.fail2ban : Manage fail2ban service] *********************** 2026-02-04 00:36:48.555684 | orchestrator | Wednesday 04 February 2026 00:36:44 +0000 (0:00:01.490) 0:00:14.548 **** 2026-02-04 00:36:48.555695 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:36:48.555741 | orchestrator | ok: [testbed-manager] 2026-02-04 00:36:48.555755 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:36:48.555768 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:36:48.555780 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:36:48.555793 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:36:48.555805 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:36:48.555817 | orchestrator | 2026-02-04 00:36:48.555831 | orchestrator | TASK [osism.services.fail2ban : Reload fail2ban configuration] ***************** 2026-02-04 00:36:48.555843 | orchestrator | Wednesday 04 
February 2026 00:36:46 +0000 (0:00:01.562) 0:00:16.110 **** 2026-02-04 00:36:48.555857 | orchestrator | changed: [testbed-manager] 2026-02-04 00:36:48.555869 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:36:48.555883 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:36:48.555894 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:36:48.555907 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:36:48.555919 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:36:48.555931 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:36:48.555943 | orchestrator | 2026-02-04 00:36:48.555956 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 00:36:48.555969 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-04 00:36:48.555983 | orchestrator | testbed-node-0 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-04 00:36:48.555995 | orchestrator | testbed-node-1 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-04 00:36:48.556008 | orchestrator | testbed-node-2 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-04 00:36:48.556049 | orchestrator | testbed-node-3 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-04 00:36:48.556063 | orchestrator | testbed-node-4 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-04 00:36:48.556076 | orchestrator | testbed-node-5 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-04 00:36:48.556088 | orchestrator | 2026-02-04 00:36:48.556101 | orchestrator | 2026-02-04 00:36:48.556113 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-04 00:36:48.556124 | orchestrator | Wednesday 04 February 2026 00:36:48 +0000 (0:00:01.758) 0:00:17.869 **** 2026-02-04 00:36:48.556135 | 
orchestrator | =============================================================================== 2026-02-04 00:36:48.556146 | orchestrator | osism.services.fail2ban : Install fail2ban package --------------------- 11.53s 2026-02-04 00:36:48.556157 | orchestrator | osism.services.fail2ban : Reload fail2ban configuration ----------------- 1.76s 2026-02-04 00:36:48.556169 | orchestrator | osism.services.fail2ban : Manage fail2ban service ----------------------- 1.56s 2026-02-04 00:36:48.556188 | orchestrator | osism.services.fail2ban : Copy configuration files ---------------------- 1.49s 2026-02-04 00:36:48.556207 | orchestrator | osism.services.fail2ban : Include distribution specific install tasks --- 1.22s 2026-02-04 00:36:48.908626 | orchestrator | + [[ -e /etc/redhat-release ]] 2026-02-04 00:36:48.908816 | orchestrator | + osism apply network 2026-02-04 00:37:01.009637 | orchestrator | 2026-02-04 00:37:01 | INFO  | Prepare task for execution of network. 2026-02-04 00:37:01.079957 | orchestrator | 2026-02-04 00:37:01 | INFO  | Task 5449a707-e968-4969-b227-f8c36b6b9d23 (network) was prepared for execution. 2026-02-04 00:37:01.080051 | orchestrator | 2026-02-04 00:37:01 | INFO  | It takes a moment until task 5449a707-e968-4969-b227-f8c36b6b9d23 (network) has been started and output is visible here. 
2026-02-04 00:37:32.077255 | orchestrator | 2026-02-04 00:37:32.077377 | orchestrator | PLAY [Apply role network] ****************************************************** 2026-02-04 00:37:32.077394 | orchestrator | 2026-02-04 00:37:32.077406 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ****** 2026-02-04 00:37:32.077418 | orchestrator | Wednesday 04 February 2026 00:37:05 +0000 (0:00:00.304) 0:00:00.304 **** 2026-02-04 00:37:32.077429 | orchestrator | ok: [testbed-manager] 2026-02-04 00:37:32.077441 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:37:32.077453 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:37:32.077464 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:37:32.077475 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:37:32.077486 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:37:32.077497 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:37:32.077508 | orchestrator | 2026-02-04 00:37:32.077519 | orchestrator | TASK [osism.commons.network : Include type specific tasks] ********************* 2026-02-04 00:37:32.077531 | orchestrator | Wednesday 04 February 2026 00:37:06 +0000 (0:00:00.749) 0:00:01.054 **** 2026-02-04 00:37:32.077544 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-04 00:37:32.077558 | orchestrator | 2026-02-04 00:37:32.077569 | orchestrator | TASK [osism.commons.network : Install required packages] *********************** 2026-02-04 00:37:32.077581 | orchestrator | Wednesday 04 February 2026 00:37:07 +0000 (0:00:01.286) 0:00:02.341 **** 2026-02-04 00:37:32.077592 | orchestrator | ok: [testbed-manager] 2026-02-04 00:37:32.077603 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:37:32.077614 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:37:32.077625 | 
orchestrator | ok: [testbed-node-2] 2026-02-04 00:37:32.077636 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:37:32.077670 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:37:32.077682 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:37:32.077750 | orchestrator | 2026-02-04 00:37:32.077770 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] ************************* 2026-02-04 00:37:32.077790 | orchestrator | Wednesday 04 February 2026 00:37:10 +0000 (0:00:02.383) 0:00:04.724 **** 2026-02-04 00:37:32.077808 | orchestrator | ok: [testbed-manager] 2026-02-04 00:37:32.077822 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:37:32.077840 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:37:32.077860 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:37:32.077879 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:37:32.077891 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:37:32.077904 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:37:32.077918 | orchestrator | 2026-02-04 00:37:32.077931 | orchestrator | TASK [osism.commons.network : Create required directories] ********************* 2026-02-04 00:37:32.077944 | orchestrator | Wednesday 04 February 2026 00:37:12 +0000 (0:00:01.971) 0:00:06.695 **** 2026-02-04 00:37:32.077957 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan) 2026-02-04 00:37:32.077970 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan) 2026-02-04 00:37:32.077983 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan) 2026-02-04 00:37:32.077997 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan) 2026-02-04 00:37:32.078010 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan) 2026-02-04 00:37:32.078148 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan) 2026-02-04 00:37:32.078163 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan) 2026-02-04 00:37:32.078175 | orchestrator | 2026-02-04 00:37:32.078186 | orchestrator | TASK [osism.commons.network : 
Prepare netplan configuration template] ********** 2026-02-04 00:37:32.078198 | orchestrator | Wednesday 04 February 2026 00:37:13 +0000 (0:00:01.083) 0:00:07.779 **** 2026-02-04 00:37:32.078208 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-02-04 00:37:32.078220 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-04 00:37:32.078231 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-04 00:37:32.078246 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-02-04 00:37:32.078266 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-02-04 00:37:32.078287 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-02-04 00:37:32.078299 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-02-04 00:37:32.078309 | orchestrator | 2026-02-04 00:37:32.078320 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] ********************** 2026-02-04 00:37:32.078331 | orchestrator | Wednesday 04 February 2026 00:37:16 +0000 (0:00:03.572) 0:00:11.351 **** 2026-02-04 00:37:32.078342 | orchestrator | changed: [testbed-manager] 2026-02-04 00:37:32.078353 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:37:32.078364 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:37:32.078375 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:37:32.078386 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:37:32.078396 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:37:32.078407 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:37:32.078418 | orchestrator | 2026-02-04 00:37:32.078428 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] *********** 2026-02-04 00:37:32.078439 | orchestrator | Wednesday 04 February 2026 00:37:18 +0000 (0:00:01.720) 0:00:13.071 **** 2026-02-04 00:37:32.078450 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-04 00:37:32.078461 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-04 00:37:32.078471 | orchestrator | ok: [testbed-node-2 
-> localhost] 2026-02-04 00:37:32.078502 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-02-04 00:37:32.078513 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-02-04 00:37:32.078524 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-02-04 00:37:32.078535 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-02-04 00:37:32.078546 | orchestrator | 2026-02-04 00:37:32.078556 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2026-02-04 00:37:32.078568 | orchestrator | Wednesday 04 February 2026 00:37:20 +0000 (0:00:01.893) 0:00:14.965 **** 2026-02-04 00:37:32.078590 | orchestrator | ok: [testbed-manager] 2026-02-04 00:37:32.078601 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:37:32.078612 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:37:32.078623 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:37:32.078633 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:37:32.078644 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:37:32.078655 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:37:32.078666 | orchestrator | 2026-02-04 00:37:32.078677 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2026-02-04 00:37:32.078733 | orchestrator | Wednesday 04 February 2026 00:37:21 +0000 (0:00:01.194) 0:00:16.160 **** 2026-02-04 00:37:32.078746 | orchestrator | skipping: [testbed-manager] 2026-02-04 00:37:32.078758 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:37:32.078769 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:37:32.078780 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:37:32.078791 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:37:32.078802 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:37:32.078813 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:37:32.078824 | orchestrator | 2026-02-04 00:37:32.078835 | orchestrator | TASK [osism.commons.network : Install package 
networkd-dispatcher] ************* 2026-02-04 00:37:32.078847 | orchestrator | Wednesday 04 February 2026 00:37:22 +0000 (0:00:00.843) 0:00:17.003 **** 2026-02-04 00:37:32.078858 | orchestrator | ok: [testbed-manager] 2026-02-04 00:37:32.078869 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:37:32.078880 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:37:32.078891 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:37:32.078902 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:37:32.078913 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:37:32.078924 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:37:32.078935 | orchestrator | 2026-02-04 00:37:32.078946 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2026-02-04 00:37:32.078957 | orchestrator | Wednesday 04 February 2026 00:37:24 +0000 (0:00:02.340) 0:00:19.344 **** 2026-02-04 00:37:32.078968 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:37:32.078979 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:37:32.078990 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:37:32.079001 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:37:32.079013 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:37:32.079024 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:37:32.079036 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'}) 2026-02-04 00:37:32.079049 | orchestrator | 2026-02-04 00:37:32.079060 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] ************** 2026-02-04 00:37:32.079078 | orchestrator | Wednesday 04 February 2026 00:37:25 +0000 (0:00:00.983) 0:00:20.328 **** 2026-02-04 00:37:32.079098 | orchestrator | ok: [testbed-manager] 2026-02-04 00:37:32.079117 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:37:32.079129 | orchestrator | changed: [testbed-node-1] 2026-02-04 
00:37:32.079139 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:37:32.079150 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:37:32.079160 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:37:32.079171 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:37:32.079182 | orchestrator | 2026-02-04 00:37:32.079193 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2026-02-04 00:37:32.079204 | orchestrator | Wednesday 04 February 2026 00:37:27 +0000 (0:00:01.780) 0:00:22.109 **** 2026-02-04 00:37:32.079215 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-04 00:37:32.079228 | orchestrator | 2026-02-04 00:37:32.079239 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2026-02-04 00:37:32.079259 | orchestrator | Wednesday 04 February 2026 00:37:28 +0000 (0:00:01.351) 0:00:23.460 **** 2026-02-04 00:37:32.079270 | orchestrator | ok: [testbed-manager] 2026-02-04 00:37:32.079281 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:37:32.079292 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:37:32.079303 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:37:32.079313 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:37:32.079324 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:37:32.079335 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:37:32.079346 | orchestrator | 2026-02-04 00:37:32.079356 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2026-02-04 00:37:32.079367 | orchestrator | Wednesday 04 February 2026 00:37:29 +0000 (0:00:01.201) 0:00:24.662 **** 2026-02-04 00:37:32.079378 | orchestrator | ok: [testbed-manager] 2026-02-04 00:37:32.079395 | orchestrator | ok: [testbed-node-0] 2026-02-04 
00:37:32.079406 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:37:32.079416 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:37:32.079427 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:37:32.079438 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:37:32.079449 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:37:32.079459 | orchestrator | 2026-02-04 00:37:32.079470 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2026-02-04 00:37:32.079481 | orchestrator | Wednesday 04 February 2026 00:37:30 +0000 (0:00:00.734) 0:00:25.397 **** 2026-02-04 00:37:32.079492 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2026-02-04 00:37:32.079503 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2026-02-04 00:37:32.079514 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2026-02-04 00:37:32.079524 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2026-02-04 00:37:32.079535 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2026-02-04 00:37:32.079546 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2026-02-04 00:37:32.079556 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2026-02-04 00:37:32.079567 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2026-02-04 00:37:32.079578 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2026-02-04 00:37:32.079589 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2026-02-04 00:37:32.079599 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2026-02-04 00:37:32.079610 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2026-02-04 00:37:32.079621 | orchestrator | skipping: [testbed-node-5] 
=> (item=/etc/netplan/01-osism.yaml)  2026-02-04 00:37:32.079632 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2026-02-04 00:37:32.079642 | orchestrator | 2026-02-04 00:37:32.079660 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2026-02-04 00:37:49.328643 | orchestrator | Wednesday 04 February 2026 00:37:32 +0000 (0:00:01.351) 0:00:26.749 **** 2026-02-04 00:37:49.328924 | orchestrator | skipping: [testbed-manager] 2026-02-04 00:37:49.328945 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:37:49.328957 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:37:49.328969 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:37:49.328980 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:37:49.328991 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:37:49.329002 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:37:49.329014 | orchestrator | 2026-02-04 00:37:49.329025 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************ 2026-02-04 00:37:49.329037 | orchestrator | Wednesday 04 February 2026 00:37:32 +0000 (0:00:00.666) 0:00:27.415 **** 2026-02-04 00:37:49.329051 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-node-1, testbed-node-0, testbed-manager, testbed-node-2, testbed-node-3, testbed-node-5, testbed-node-4 2026-02-04 00:37:49.329091 | orchestrator | 2026-02-04 00:37:49.329103 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************ 2026-02-04 00:37:49.329114 | orchestrator | Wednesday 04 February 2026 00:37:37 +0000 (0:00:04.716) 0:00:32.132 **** 2026-02-04 00:37:49.329127 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', 
'192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2026-02-04 00:37:49.329141 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2026-02-04 00:37:49.329153 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2026-02-04 00:37:49.329164 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2026-02-04 00:37:49.329176 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2026-02-04 00:37:49.329187 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2026-02-04 00:37:49.329214 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2026-02-04 00:37:49.329226 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': 
['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2026-02-04 00:37:49.329237 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2026-02-04 00:37:49.329249 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2026-02-04 00:37:49.329267 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2026-02-04 00:37:49.329306 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2026-02-04 00:37:49.329328 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2026-02-04 00:37:49.329360 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': 
'192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2026-02-04 00:37:49.329378 | orchestrator | 2026-02-04 00:37:49.329397 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2026-02-04 00:37:49.329414 | orchestrator | Wednesday 04 February 2026 00:37:43 +0000 (0:00:06.145) 0:00:38.278 **** 2026-02-04 00:37:49.329431 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2026-02-04 00:37:49.329451 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2026-02-04 00:37:49.329471 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2026-02-04 00:37:49.329490 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2026-02-04 00:37:49.329510 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2026-02-04 00:37:49.329528 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', 
'192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2026-02-04 00:37:49.329557 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2026-02-04 00:37:49.329576 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2026-02-04 00:37:49.329620 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2026-02-04 00:37:49.329641 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2026-02-04 00:37:49.329658 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2026-02-04 00:37:49.329670 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2026-02-04 00:37:49.329764 | orchestrator | changed: 
[testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2026-02-04 00:38:03.698335 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2026-02-04 00:38:03.698433 | orchestrator | 2026-02-04 00:38:03.698449 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2026-02-04 00:38:03.698461 | orchestrator | Wednesday 04 February 2026 00:37:49 +0000 (0:00:06.103) 0:00:44.381 **** 2026-02-04 00:38:03.698474 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-04 00:38:03.698485 | orchestrator | 2026-02-04 00:38:03.698495 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2026-02-04 00:38:03.698505 | orchestrator | Wednesday 04 February 2026 00:37:51 +0000 (0:00:01.343) 0:00:45.724 **** 2026-02-04 00:38:03.698515 | orchestrator | ok: [testbed-manager] 2026-02-04 00:38:03.698526 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:38:03.698536 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:38:03.698546 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:38:03.698556 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:38:03.698565 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:38:03.698575 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:38:03.698585 | orchestrator | 2026-02-04 00:38:03.698595 | orchestrator | TASK [osism.commons.network : Remove 
unused configuration files] *************** 2026-02-04 00:38:03.698604 | orchestrator | Wednesday 04 February 2026 00:37:52 +0000 (0:00:01.211) 0:00:46.936 **** 2026-02-04 00:38:03.698615 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)  2026-02-04 00:38:03.698625 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)  2026-02-04 00:38:03.698635 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-02-04 00:38:03.698645 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-02-04 00:38:03.698654 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)  2026-02-04 00:38:03.698664 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)  2026-02-04 00:38:03.698708 | orchestrator | skipping: [testbed-manager] 2026-02-04 00:38:03.698720 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-02-04 00:38:03.698730 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-02-04 00:38:03.698740 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)  2026-02-04 00:38:03.698750 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)  2026-02-04 00:38:03.698760 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-02-04 00:38:03.698769 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-02-04 00:38:03.698779 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:38:03.698805 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)  2026-02-04 00:38:03.698815 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)  
2026-02-04 00:38:03.698825 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-02-04 00:38:03.698857 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:38:03.698867 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-02-04 00:38:03.698877 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)  2026-02-04 00:38:03.698890 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)  2026-02-04 00:38:03.698901 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-02-04 00:38:03.698912 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-02-04 00:38:03.698924 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:38:03.698936 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)  2026-02-04 00:38:03.698947 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)  2026-02-04 00:38:03.698958 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-02-04 00:38:03.698968 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-02-04 00:38:03.698977 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:38:03.698987 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:38:03.698997 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)  2026-02-04 00:38:03.699006 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)  2026-02-04 00:38:03.699016 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-02-04 00:38:03.699026 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-02-04 00:38:03.699035 | 
orchestrator | skipping: [testbed-node-5] 2026-02-04 00:38:03.699045 | orchestrator | 2026-02-04 00:38:03.699055 | orchestrator | TASK [osism.commons.network : Include network extra init] ********************** 2026-02-04 00:38:03.699079 | orchestrator | Wednesday 04 February 2026 00:37:53 +0000 (0:00:01.018) 0:00:47.955 **** 2026-02-04 00:38:03.699090 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/network-extra-init.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-04 00:38:03.699100 | orchestrator | 2026-02-04 00:38:03.699110 | orchestrator | TASK [osism.commons.network : Deploy network-extra-init script] **************** 2026-02-04 00:38:03.699119 | orchestrator | Wednesday 04 February 2026 00:37:54 +0000 (0:00:01.375) 0:00:49.330 **** 2026-02-04 00:38:03.699129 | orchestrator | skipping: [testbed-manager] 2026-02-04 00:38:03.699139 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:38:03.699149 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:38:03.699158 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:38:03.699168 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:38:03.699177 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:38:03.699187 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:38:03.699196 | orchestrator | 2026-02-04 00:38:03.699206 | orchestrator | TASK [osism.commons.network : Deploy network-extra-init systemd service] ******* 2026-02-04 00:38:03.699216 | orchestrator | Wednesday 04 February 2026 00:37:55 +0000 (0:00:00.710) 0:00:50.041 **** 2026-02-04 00:38:03.699225 | orchestrator | skipping: [testbed-manager] 2026-02-04 00:38:03.699235 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:38:03.699244 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:38:03.699254 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:38:03.699263 | 
orchestrator | skipping: [testbed-node-3] 2026-02-04 00:38:03.699273 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:38:03.699282 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:38:03.699292 | orchestrator | 2026-02-04 00:38:03.699301 | orchestrator | TASK [osism.commons.network : Enable and start network-extra-init service] ***** 2026-02-04 00:38:03.699311 | orchestrator | Wednesday 04 February 2026 00:37:56 +0000 (0:00:00.912) 0:00:50.954 **** 2026-02-04 00:38:03.699320 | orchestrator | skipping: [testbed-manager] 2026-02-04 00:38:03.699336 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:38:03.699346 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:38:03.699356 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:38:03.699365 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:38:03.699375 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:38:03.699385 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:38:03.699394 | orchestrator | 2026-02-04 00:38:03.699404 | orchestrator | TASK [osism.commons.network : Disable and stop network-extra-init service] ***** 2026-02-04 00:38:03.699414 | orchestrator | Wednesday 04 February 2026 00:37:56 +0000 (0:00:00.667) 0:00:51.621 **** 2026-02-04 00:38:03.699423 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:38:03.699433 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:38:03.699443 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:38:03.699453 | orchestrator | ok: [testbed-manager] 2026-02-04 00:38:03.699462 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:38:03.699472 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:38:03.699482 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:38:03.699492 | orchestrator | 2026-02-04 00:38:03.699501 | orchestrator | TASK [osism.commons.network : Remove network-extra-init systemd service] ******* 2026-02-04 00:38:03.699511 | orchestrator | Wednesday 04 February 2026 00:37:58 +0000 (0:00:01.831) 0:00:53.452 **** 
2026-02-04 00:38:03.699521 | orchestrator | ok: [testbed-manager] 2026-02-04 00:38:03.699531 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:38:03.699540 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:38:03.699550 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:38:03.699560 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:38:03.699569 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:38:03.699579 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:38:03.699588 | orchestrator | 2026-02-04 00:38:03.699598 | orchestrator | TASK [osism.commons.network : Remove network-extra-init script] **************** 2026-02-04 00:38:03.699613 | orchestrator | Wednesday 04 February 2026 00:37:59 +0000 (0:00:00.991) 0:00:54.444 **** 2026-02-04 00:38:03.699623 | orchestrator | ok: [testbed-manager] 2026-02-04 00:38:03.699633 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:38:03.699643 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:38:03.699652 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:38:03.699662 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:38:03.699702 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:38:03.699713 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:38:03.699722 | orchestrator | 2026-02-04 00:38:03.699732 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] ************** 2026-02-04 00:38:03.699742 | orchestrator | Wednesday 04 February 2026 00:38:02 +0000 (0:00:02.479) 0:00:56.923 **** 2026-02-04 00:38:03.699752 | orchestrator | skipping: [testbed-manager] 2026-02-04 00:38:03.699761 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:38:03.699771 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:38:03.699781 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:38:03.699791 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:38:03.699800 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:38:03.699810 | orchestrator | skipping: [testbed-node-5] 2026-02-04 
00:38:03.699820 | orchestrator | 2026-02-04 00:38:03.699830 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ******** 2026-02-04 00:38:03.699840 | orchestrator | Wednesday 04 February 2026 00:38:03 +0000 (0:00:00.840) 0:00:57.763 **** 2026-02-04 00:38:03.699849 | orchestrator | skipping: [testbed-manager] 2026-02-04 00:38:03.699859 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:38:03.699869 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:38:03.699878 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:38:03.699888 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:38:03.699898 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:38:03.699907 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:38:03.699917 | orchestrator | 2026-02-04 00:38:03.699927 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 00:38:03.699937 | orchestrator | testbed-manager : ok=25  changed=5  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-02-04 00:38:03.699954 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-02-04 00:38:03.699970 | orchestrator | testbed-node-1 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-02-04 00:38:04.155572 | orchestrator | testbed-node-2 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-02-04 00:38:04.155717 | orchestrator | testbed-node-3 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-02-04 00:38:04.155737 | orchestrator | testbed-node-4 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-02-04 00:38:04.155749 | orchestrator | testbed-node-5 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-02-04 00:38:04.155761 | orchestrator | 2026-02-04 00:38:04.155772 | orchestrator | 2026-02-04 00:38:04.155785 | 
orchestrator | TASKS RECAP ******************************************************************** 2026-02-04 00:38:04.155797 | orchestrator | Wednesday 04 February 2026 00:38:03 +0000 (0:00:00.601) 0:00:58.365 **** 2026-02-04 00:38:04.155808 | orchestrator | =============================================================================== 2026-02-04 00:38:04.155819 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 6.15s 2026-02-04 00:38:04.155830 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 6.10s 2026-02-04 00:38:04.155841 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.72s 2026-02-04 00:38:04.155852 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.57s 2026-02-04 00:38:04.155863 | orchestrator | osism.commons.network : Remove network-extra-init script ---------------- 2.48s 2026-02-04 00:38:04.155874 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.38s 2026-02-04 00:38:04.155885 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.34s 2026-02-04 00:38:04.155895 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.97s 2026-02-04 00:38:04.155906 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.89s 2026-02-04 00:38:04.155917 | orchestrator | osism.commons.network : Disable and stop network-extra-init service ----- 1.83s 2026-02-04 00:38:04.155928 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.78s 2026-02-04 00:38:04.155939 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.72s 2026-02-04 00:38:04.155949 | orchestrator | osism.commons.network : Include network extra init ---------------------- 1.38s 2026-02-04 00:38:04.155960 | orchestrator | 
osism.commons.network : Remove unused configuration files --------------- 1.35s 2026-02-04 00:38:04.155971 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.35s 2026-02-04 00:38:04.155982 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.34s 2026-02-04 00:38:04.155993 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.29s 2026-02-04 00:38:04.156003 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.21s 2026-02-04 00:38:04.156014 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.20s 2026-02-04 00:38:04.156026 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.19s 2026-02-04 00:38:04.501858 | orchestrator | + osism apply wireguard 2026-02-04 00:38:16.625335 | orchestrator | 2026-02-04 00:38:16 | INFO  | Prepare task for execution of wireguard. 2026-02-04 00:38:16.698747 | orchestrator | 2026-02-04 00:38:16 | INFO  | Task 8940d35d-51c3-47c0-9679-465c4c754adc (wireguard) was prepared for execution. 2026-02-04 00:38:16.698860 | orchestrator | 2026-02-04 00:38:16 | INFO  | It takes a moment until task 8940d35d-51c3-47c0-9679-465c4c754adc (wireguard) has been started and output is visible here. 
2026-02-04 00:38:38.514563 | orchestrator | 2026-02-04 00:38:38.514730 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2026-02-04 00:38:38.514750 | orchestrator | 2026-02-04 00:38:38.514762 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2026-02-04 00:38:38.514774 | orchestrator | Wednesday 04 February 2026 00:38:21 +0000 (0:00:00.260) 0:00:00.260 **** 2026-02-04 00:38:38.514785 | orchestrator | ok: [testbed-manager] 2026-02-04 00:38:38.514797 | orchestrator | 2026-02-04 00:38:38.514808 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2026-02-04 00:38:38.514819 | orchestrator | Wednesday 04 February 2026 00:38:23 +0000 (0:00:01.781) 0:00:02.042 **** 2026-02-04 00:38:38.514830 | orchestrator | changed: [testbed-manager] 2026-02-04 00:38:38.514842 | orchestrator | 2026-02-04 00:38:38.514853 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2026-02-04 00:38:38.514864 | orchestrator | Wednesday 04 February 2026 00:38:30 +0000 (0:00:07.374) 0:00:09.416 **** 2026-02-04 00:38:38.514874 | orchestrator | changed: [testbed-manager] 2026-02-04 00:38:38.514885 | orchestrator | 2026-02-04 00:38:38.514896 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2026-02-04 00:38:38.514907 | orchestrator | Wednesday 04 February 2026 00:38:31 +0000 (0:00:00.568) 0:00:09.985 **** 2026-02-04 00:38:38.514918 | orchestrator | changed: [testbed-manager] 2026-02-04 00:38:38.514928 | orchestrator | 2026-02-04 00:38:38.514939 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2026-02-04 00:38:38.514950 | orchestrator | Wednesday 04 February 2026 00:38:31 +0000 (0:00:00.462) 0:00:10.448 **** 2026-02-04 00:38:38.514961 | orchestrator | ok: [testbed-manager] 2026-02-04 00:38:38.514972 | orchestrator | 2026-02-04 
00:38:38.514982 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2026-02-04 00:38:38.514993 | orchestrator | Wednesday 04 February 2026 00:38:32 +0000 (0:00:00.771) 0:00:11.219 **** 2026-02-04 00:38:38.515004 | orchestrator | ok: [testbed-manager] 2026-02-04 00:38:38.515015 | orchestrator | 2026-02-04 00:38:38.515025 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2026-02-04 00:38:38.515036 | orchestrator | Wednesday 04 February 2026 00:38:32 +0000 (0:00:00.434) 0:00:11.653 **** 2026-02-04 00:38:38.515047 | orchestrator | ok: [testbed-manager] 2026-02-04 00:38:38.515058 | orchestrator | 2026-02-04 00:38:38.515068 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2026-02-04 00:38:38.515079 | orchestrator | Wednesday 04 February 2026 00:38:33 +0000 (0:00:00.465) 0:00:12.119 **** 2026-02-04 00:38:38.515091 | orchestrator | changed: [testbed-manager] 2026-02-04 00:38:38.515102 | orchestrator | 2026-02-04 00:38:38.515115 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] ************** 2026-02-04 00:38:38.515127 | orchestrator | Wednesday 04 February 2026 00:38:34 +0000 (0:00:01.253) 0:00:13.372 **** 2026-02-04 00:38:38.515140 | orchestrator | changed: [testbed-manager] => (item=None) 2026-02-04 00:38:38.515153 | orchestrator | changed: [testbed-manager] 2026-02-04 00:38:38.515166 | orchestrator | 2026-02-04 00:38:38.515179 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] ********** 2026-02-04 00:38:38.515193 | orchestrator | Wednesday 04 February 2026 00:38:35 +0000 (0:00:00.978) 0:00:14.351 **** 2026-02-04 00:38:38.515212 | orchestrator | changed: [testbed-manager] 2026-02-04 00:38:38.515233 | orchestrator | 2026-02-04 00:38:38.515263 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] *************** 2026-02-04 
00:38:38.515283 | orchestrator | Wednesday 04 February 2026 00:38:37 +0000 (0:00:01.769) 0:00:16.120 **** 2026-02-04 00:38:38.515302 | orchestrator | changed: [testbed-manager] 2026-02-04 00:38:38.515321 | orchestrator | 2026-02-04 00:38:38.515339 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 00:38:38.515417 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-04 00:38:38.515440 | orchestrator | 2026-02-04 00:38:38.515461 | orchestrator | 2026-02-04 00:38:38.515480 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-04 00:38:38.515627 | orchestrator | Wednesday 04 February 2026 00:38:38 +0000 (0:00:00.970) 0:00:17.090 **** 2026-02-04 00:38:38.515686 | orchestrator | =============================================================================== 2026-02-04 00:38:38.515708 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 7.37s 2026-02-04 00:38:38.515726 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.78s 2026-02-04 00:38:38.515744 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.77s 2026-02-04 00:38:38.515764 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.25s 2026-02-04 00:38:38.515782 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.98s 2026-02-04 00:38:38.515801 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.97s 2026-02-04 00:38:38.515820 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.77s 2026-02-04 00:38:38.515838 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.57s 2026-02-04 00:38:38.515857 | orchestrator | osism.services.wireguard : Get 
private key - server --------------------- 0.47s 2026-02-04 00:38:38.515890 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.46s 2026-02-04 00:38:38.515909 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.43s 2026-02-04 00:38:38.841490 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh 2026-02-04 00:38:38.874172 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current 2026-02-04 00:38:38.874274 | orchestrator | Dload Upload Total Spent Left Speed 2026-02-04 00:38:38.950902 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 14 100 14 0 0 182 0 --:--:-- --:--:-- --:--:-- 184 2026-02-04 00:38:38.967719 | orchestrator | + osism apply --environment custom workarounds 2026-02-04 00:38:41.094115 | orchestrator | 2026-02-04 00:38:41 | INFO  | Trying to run play workarounds in environment custom 2026-02-04 00:38:51.225407 | orchestrator | 2026-02-04 00:38:51 | INFO  | Prepare task for execution of workarounds. 2026-02-04 00:38:51.299288 | orchestrator | 2026-02-04 00:38:51 | INFO  | Task ad539705-575e-462f-ba7d-7320dc8344df (workarounds) was prepared for execution. 2026-02-04 00:38:51.299357 | orchestrator | 2026-02-04 00:38:51 | INFO  | It takes a moment until task ad539705-575e-462f-ba7d-7320dc8344df (workarounds) has been started and output is visible here. 
2026-02-04 00:39:18.330352 | orchestrator | 2026-02-04 00:39:18.330447 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-04 00:39:18.330459 | orchestrator | 2026-02-04 00:39:18.330467 | orchestrator | TASK [Group hosts based on virtualization_role] ******************************** 2026-02-04 00:39:18.330476 | orchestrator | Wednesday 04 February 2026 00:38:55 +0000 (0:00:00.133) 0:00:00.133 **** 2026-02-04 00:39:18.330484 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest) 2026-02-04 00:39:18.330492 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest) 2026-02-04 00:39:18.330499 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest) 2026-02-04 00:39:18.330507 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest) 2026-02-04 00:39:18.330514 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest) 2026-02-04 00:39:18.330522 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest) 2026-02-04 00:39:18.330529 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest) 2026-02-04 00:39:18.330551 | orchestrator | 2026-02-04 00:39:18.330559 | orchestrator | PLAY [Apply netplan configuration on the manager node] ************************* 2026-02-04 00:39:18.330566 | orchestrator | 2026-02-04 00:39:18.330573 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2026-02-04 00:39:18.330581 | orchestrator | Wednesday 04 February 2026 00:38:56 +0000 (0:00:00.885) 0:00:01.019 **** 2026-02-04 00:39:18.330588 | orchestrator | ok: [testbed-manager] 2026-02-04 00:39:18.330597 | orchestrator | 2026-02-04 00:39:18.330604 | orchestrator | PLAY [Apply netplan configuration on all other nodes] ************************** 2026-02-04 00:39:18.330612 | orchestrator | 2026-02-04 00:39:18.330619 | orchestrator | TASK [Apply netplan 
configuration] ********************************************* 2026-02-04 00:39:18.330626 | orchestrator | Wednesday 04 February 2026 00:38:59 +0000 (0:00:02.617) 0:00:03.636 **** 2026-02-04 00:39:18.330633 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:39:18.330641 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:39:18.330675 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:39:18.330684 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:39:18.330691 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:39:18.330698 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:39:18.330705 | orchestrator | 2026-02-04 00:39:18.330712 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] ************************* 2026-02-04 00:39:18.330719 | orchestrator | 2026-02-04 00:39:18.330727 | orchestrator | TASK [Copy custom CA certificates] ********************************************* 2026-02-04 00:39:18.330734 | orchestrator | Wednesday 04 February 2026 00:39:01 +0000 (0:00:01.872) 0:00:05.508 **** 2026-02-04 00:39:18.330742 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-02-04 00:39:18.330750 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-02-04 00:39:18.330758 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-02-04 00:39:18.330765 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-02-04 00:39:18.330772 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-02-04 00:39:18.330780 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-02-04 00:39:18.330787 | orchestrator | 2026-02-04 00:39:18.330794 | orchestrator | TASK [Run 
update-ca-certificates] ********************************************** 2026-02-04 00:39:18.330802 | orchestrator | Wednesday 04 February 2026 00:39:02 +0000 (0:00:01.615) 0:00:07.124 **** 2026-02-04 00:39:18.330809 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:39:18.330817 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:39:18.330824 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:39:18.330831 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:39:18.330838 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:39:18.330845 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:39:18.330852 | orchestrator | 2026-02-04 00:39:18.330860 | orchestrator | TASK [Run update-ca-trust] ***************************************************** 2026-02-04 00:39:18.330867 | orchestrator | Wednesday 04 February 2026 00:39:06 +0000 (0:00:03.822) 0:00:10.947 **** 2026-02-04 00:39:18.330874 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:39:18.330886 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:39:18.330894 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:39:18.330902 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:39:18.330910 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:39:18.330919 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:39:18.330927 | orchestrator | 2026-02-04 00:39:18.330936 | orchestrator | PLAY [Add a workaround service] ************************************************ 2026-02-04 00:39:18.330945 | orchestrator | 2026-02-04 00:39:18.330954 | orchestrator | TASK [Copy workarounds.sh scripts] ********************************************* 2026-02-04 00:39:18.330968 | orchestrator | Wednesday 04 February 2026 00:39:07 +0000 (0:00:00.870) 0:00:11.817 **** 2026-02-04 00:39:18.330976 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:39:18.330984 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:39:18.330993 | orchestrator | changed: [testbed-node-2] 2026-02-04 
00:39:18.331002 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:39:18.331010 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:39:18.331018 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:39:18.331026 | orchestrator | changed: [testbed-manager] 2026-02-04 00:39:18.331034 | orchestrator | 2026-02-04 00:39:18.331042 | orchestrator | TASK [Copy workarounds systemd unit file] ************************************** 2026-02-04 00:39:18.331051 | orchestrator | Wednesday 04 February 2026 00:39:09 +0000 (0:00:01.822) 0:00:13.640 **** 2026-02-04 00:39:18.331060 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:39:18.331068 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:39:18.331077 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:39:18.331085 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:39:18.331093 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:39:18.331100 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:39:18.331121 | orchestrator | changed: [testbed-manager] 2026-02-04 00:39:18.331128 | orchestrator | 2026-02-04 00:39:18.331136 | orchestrator | TASK [Reload systemd daemon] *************************************************** 2026-02-04 00:39:18.331143 | orchestrator | Wednesday 04 February 2026 00:39:11 +0000 (0:00:01.924) 0:00:15.564 **** 2026-02-04 00:39:18.331150 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:39:18.331158 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:39:18.331165 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:39:18.331172 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:39:18.331179 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:39:18.331186 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:39:18.331193 | orchestrator | ok: [testbed-manager] 2026-02-04 00:39:18.331200 | orchestrator | 2026-02-04 00:39:18.331208 | orchestrator | TASK [Enable workarounds.service (Debian)] ************************************* 2026-02-04 00:39:18.331215 | orchestrator 
| Wednesday 04 February 2026 00:39:12 +0000 (0:00:01.688) 0:00:17.252 **** 2026-02-04 00:39:18.331222 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:39:18.331229 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:39:18.331237 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:39:18.331244 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:39:18.331251 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:39:18.331258 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:39:18.331265 | orchestrator | changed: [testbed-manager] 2026-02-04 00:39:18.331272 | orchestrator | 2026-02-04 00:39:18.331279 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] *************************** 2026-02-04 00:39:18.331286 | orchestrator | Wednesday 04 February 2026 00:39:14 +0000 (0:00:01.921) 0:00:19.174 **** 2026-02-04 00:39:18.331294 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:39:18.331301 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:39:18.331308 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:39:18.331315 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:39:18.331322 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:39:18.331329 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:39:18.331336 | orchestrator | skipping: [testbed-manager] 2026-02-04 00:39:18.331343 | orchestrator | 2026-02-04 00:39:18.331350 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ****************** 2026-02-04 00:39:18.331358 | orchestrator | 2026-02-04 00:39:18.331365 | orchestrator | TASK [Install python3-docker] ************************************************** 2026-02-04 00:39:18.331372 | orchestrator | Wednesday 04 February 2026 00:39:15 +0000 (0:00:00.646) 0:00:19.821 **** 2026-02-04 00:39:18.331379 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:39:18.331386 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:39:18.331393 | orchestrator | ok: [testbed-node-1] 
2026-02-04 00:39:18.331400 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:39:18.331408 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:39:18.331419 | orchestrator | ok: [testbed-manager] 2026-02-04 00:39:18.331426 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:39:18.331433 | orchestrator | 2026-02-04 00:39:18.331441 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 00:39:18.331449 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-04 00:39:18.331457 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-04 00:39:18.331465 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-04 00:39:18.331472 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-04 00:39:18.331479 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-04 00:39:18.331487 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-04 00:39:18.331494 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-04 00:39:18.331501 | orchestrator | 2026-02-04 00:39:18.331508 | orchestrator | 2026-02-04 00:39:18.331519 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-04 00:39:18.331526 | orchestrator | Wednesday 04 February 2026 00:39:18 +0000 (0:00:02.859) 0:00:22.680 **** 2026-02-04 00:39:18.331534 | orchestrator | =============================================================================== 2026-02-04 00:39:18.331541 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.82s 2026-02-04 00:39:18.331548 | orchestrator | Install python3-docker 
-------------------------------------------------- 2.86s 2026-02-04 00:39:18.331555 | orchestrator | Apply netplan configuration --------------------------------------------- 2.62s 2026-02-04 00:39:18.331562 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.92s 2026-02-04 00:39:18.331569 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.92s 2026-02-04 00:39:18.331576 | orchestrator | Apply netplan configuration --------------------------------------------- 1.87s 2026-02-04 00:39:18.331583 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.82s 2026-02-04 00:39:18.331591 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.69s 2026-02-04 00:39:18.331598 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.62s 2026-02-04 00:39:18.331605 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.89s 2026-02-04 00:39:18.331612 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.87s 2026-02-04 00:39:18.331623 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.65s 2026-02-04 00:39:19.066298 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes 2026-02-04 00:39:31.249285 | orchestrator | 2026-02-04 00:39:31 | INFO  | Prepare task for execution of reboot. 2026-02-04 00:39:31.327243 | orchestrator | 2026-02-04 00:39:31 | INFO  | Task 953139d1-e8c0-466d-8bf0-15e7322127f9 (reboot) was prepared for execution. 2026-02-04 00:39:31.327325 | orchestrator | 2026-02-04 00:39:31 | INFO  | It takes a moment until task 953139d1-e8c0-466d-8bf0-15e7322127f9 (reboot) has been started and output is visible here. 
2026-02-04 00:39:42.187222 | orchestrator | 2026-02-04 00:39:42.187305 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-02-04 00:39:42.187313 | orchestrator | 2026-02-04 00:39:42.187317 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-02-04 00:39:42.187337 | orchestrator | Wednesday 04 February 2026 00:39:35 +0000 (0:00:00.212) 0:00:00.212 **** 2026-02-04 00:39:42.187341 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:39:42.187346 | orchestrator | 2026-02-04 00:39:42.187350 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-02-04 00:39:42.187354 | orchestrator | Wednesday 04 February 2026 00:39:35 +0000 (0:00:00.109) 0:00:00.322 **** 2026-02-04 00:39:42.187358 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:39:42.187362 | orchestrator | 2026-02-04 00:39:42.187365 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-02-04 00:39:42.187369 | orchestrator | Wednesday 04 February 2026 00:39:36 +0000 (0:00:00.980) 0:00:01.302 **** 2026-02-04 00:39:42.187373 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:39:42.187377 | orchestrator | 2026-02-04 00:39:42.187381 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-02-04 00:39:42.187385 | orchestrator | 2026-02-04 00:39:42.187389 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-02-04 00:39:42.187392 | orchestrator | Wednesday 04 February 2026 00:39:37 +0000 (0:00:00.151) 0:00:01.454 **** 2026-02-04 00:39:42.187396 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:39:42.187400 | orchestrator | 2026-02-04 00:39:42.187403 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-02-04 00:39:42.187407 | orchestrator | Wednesday 04 February 
2026 00:39:37 +0000 (0:00:00.105) 0:00:01.559 ****
2026-02-04 00:39:42.187411 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:39:42.187414 | orchestrator |
2026-02-04 00:39:42.187418 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-02-04 00:39:42.187422 | orchestrator | Wednesday 04 February 2026 00:39:37 +0000 (0:00:00.747) 0:00:02.307 ****
2026-02-04 00:39:42.187426 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:39:42.187430 | orchestrator |
2026-02-04 00:39:42.187433 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-02-04 00:39:42.187437 | orchestrator |
2026-02-04 00:39:42.187441 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-02-04 00:39:42.187444 | orchestrator | Wednesday 04 February 2026 00:39:38 +0000 (0:00:00.125) 0:00:02.433 ****
2026-02-04 00:39:42.187448 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:39:42.187452 | orchestrator |
2026-02-04 00:39:42.187455 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-02-04 00:39:42.187459 | orchestrator | Wednesday 04 February 2026 00:39:38 +0000 (0:00:00.219) 0:00:02.652 ****
2026-02-04 00:39:42.187463 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:39:42.187466 | orchestrator |
2026-02-04 00:39:42.187470 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-02-04 00:39:42.187474 | orchestrator | Wednesday 04 February 2026 00:39:38 +0000 (0:00:00.684) 0:00:03.337 ****
2026-02-04 00:39:42.187478 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:39:42.187481 | orchestrator |
2026-02-04 00:39:42.187485 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-02-04 00:39:42.187489 | orchestrator |
2026-02-04 00:39:42.187492 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-02-04 00:39:42.187496 | orchestrator | Wednesday 04 February 2026 00:39:39 +0000 (0:00:00.116) 0:00:03.454 ****
2026-02-04 00:39:42.187500 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:39:42.187503 | orchestrator |
2026-02-04 00:39:42.187507 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-02-04 00:39:42.187522 | orchestrator | Wednesday 04 February 2026 00:39:39 +0000 (0:00:00.106) 0:00:03.561 ****
2026-02-04 00:39:42.187526 | orchestrator | changed: [testbed-node-3]
2026-02-04 00:39:42.187529 | orchestrator |
2026-02-04 00:39:42.187533 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-02-04 00:39:42.187537 | orchestrator | Wednesday 04 February 2026 00:39:39 +0000 (0:00:00.673) 0:00:04.234 ****
2026-02-04 00:39:42.187541 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:39:42.187548 | orchestrator |
2026-02-04 00:39:42.187552 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-02-04 00:39:42.187556 | orchestrator |
2026-02-04 00:39:42.187560 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-02-04 00:39:42.187563 | orchestrator | Wednesday 04 February 2026 00:39:39 +0000 (0:00:00.108) 0:00:04.343 ****
2026-02-04 00:39:42.187567 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:39:42.187571 | orchestrator |
2026-02-04 00:39:42.187575 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-02-04 00:39:42.187578 | orchestrator | Wednesday 04 February 2026 00:39:40 +0000 (0:00:00.101) 0:00:04.444 ****
2026-02-04 00:39:42.187582 | orchestrator | changed: [testbed-node-4]
2026-02-04 00:39:42.187586 | orchestrator |
2026-02-04 00:39:42.187589 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-02-04 00:39:42.187593 | orchestrator | Wednesday 04 February 2026 00:39:40 +0000 (0:00:00.679) 0:00:05.124 ****
2026-02-04 00:39:42.187597 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:39:42.187601 | orchestrator |
2026-02-04 00:39:42.187604 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-02-04 00:39:42.187608 | orchestrator |
2026-02-04 00:39:42.187612 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-02-04 00:39:42.187615 | orchestrator | Wednesday 04 February 2026 00:39:40 +0000 (0:00:00.129) 0:00:05.254 ****
2026-02-04 00:39:42.187619 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:39:42.187623 | orchestrator |
2026-02-04 00:39:42.187626 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-02-04 00:39:42.187630 | orchestrator | Wednesday 04 February 2026 00:39:40 +0000 (0:00:00.135) 0:00:05.389 ****
2026-02-04 00:39:42.187634 | orchestrator | changed: [testbed-node-5]
2026-02-04 00:39:42.187638 | orchestrator |
2026-02-04 00:39:42.187675 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-02-04 00:39:42.187679 | orchestrator | Wednesday 04 February 2026 00:39:41 +0000 (0:00:00.707) 0:00:06.097 ****
2026-02-04 00:39:42.187693 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:39:42.187697 | orchestrator |
2026-02-04 00:39:42.187701 | orchestrator | PLAY RECAP *********************************************************************
2026-02-04 00:39:42.187706 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-04 00:39:42.187710 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-04 00:39:42.187714 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-04 00:39:42.187718 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-04 00:39:42.187722 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-04 00:39:42.187725 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-04 00:39:42.187729 | orchestrator |
2026-02-04 00:39:42.187733 | orchestrator |
2026-02-04 00:39:42.187736 | orchestrator | TASKS RECAP ********************************************************************
2026-02-04 00:39:42.187740 | orchestrator | Wednesday 04 February 2026 00:39:41 +0000 (0:00:00.039) 0:00:06.136 ****
2026-02-04 00:39:42.187744 | orchestrator | ===============================================================================
2026-02-04 00:39:42.187748 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.47s
2026-02-04 00:39:42.187752 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.78s
2026-02-04 00:39:42.187759 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.67s
2026-02-04 00:39:42.577222 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes
2026-02-04 00:39:54.786097 | orchestrator | 2026-02-04 00:39:54 | INFO  | Prepare task for execution of wait-for-connection.
2026-02-04 00:39:54.861306 | orchestrator | 2026-02-04 00:39:54 | INFO  | Task 2a34a9cb-6854-4d58-8656-4f1c40809e1c (wait-for-connection) was prepared for execution.
2026-02-04 00:39:54.861379 | orchestrator | 2026-02-04 00:39:54 | INFO  | It takes a moment until task 2a34a9cb-6854-4d58-8656-4f1c40809e1c (wait-for-connection) has been started and output is visible here.
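The `osism apply wait-for-connection` step above boils down to retrying a per-host connection probe until it succeeds or a deadline is hit. A minimal standalone sketch of that pattern (the `probe` command, attempt count, and delay are illustrative assumptions, not the playbook's actual implementation):

```shell
# Retry a probe command until it succeeds or max_attempts is exhausted.
# Usage: wait_reachable <probe-cmd> <max_attempts> <delay-seconds>
wait_reachable() {
    local probe="$1" max_attempts="$2" delay="$3" attempt=1
    until $probe; do
        if (( attempt++ >= max_attempts )); then
            return 1  # gave up: host never became reachable
        fi
        sleep "$delay"
    done
    return 0
}

# Hypothetical usage: wait up to 60 x 5s for SSH on a node to answer
# wait_reachable "nc -z testbed-node-0 22" 60 5
```

Ansible's `wait_for_connection` module does the equivalent at the connection-plugin level, which is why all six nodes report `ok` once SSH is back after the reboot.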
2026-02-04 00:40:11.709541 | orchestrator |
2026-02-04 00:40:11.709695 | orchestrator | PLAY [Wait until remote systems are reachable] *********************************
2026-02-04 00:40:11.709725 | orchestrator |
2026-02-04 00:40:11.709737 | orchestrator | TASK [Wait until remote system is reachable] ***********************************
2026-02-04 00:40:11.709749 | orchestrator | Wednesday 04 February 2026 00:39:59 +0000 (0:00:00.240) 0:00:00.240 ****
2026-02-04 00:40:11.709761 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:40:11.709774 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:40:11.709785 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:40:11.709796 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:40:11.709807 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:40:11.709837 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:40:11.709849 | orchestrator |
2026-02-04 00:40:11.709860 | orchestrator | PLAY RECAP *********************************************************************
2026-02-04 00:40:11.709872 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-04 00:40:11.709886 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-04 00:40:11.709897 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-04 00:40:11.709908 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-04 00:40:11.709919 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-04 00:40:11.709930 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-04 00:40:11.709941 | orchestrator |
2026-02-04 00:40:11.709953 | orchestrator |
2026-02-04 00:40:11.709964 | orchestrator | TASKS RECAP ********************************************************************
2026-02-04 00:40:11.709975 | orchestrator | Wednesday 04 February 2026 00:40:11 +0000 (0:00:11.676) 0:00:11.916 ****
2026-02-04 00:40:11.709986 | orchestrator | ===============================================================================
2026-02-04 00:40:11.709998 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.68s
2026-02-04 00:40:12.070131 | orchestrator | + osism apply hddtemp
2026-02-04 00:40:24.276930 | orchestrator | 2026-02-04 00:40:24 | INFO  | Prepare task for execution of hddtemp.
2026-02-04 00:40:24.361909 | orchestrator | 2026-02-04 00:40:24 | INFO  | Task 46c211e2-ab0c-4ec2-8c17-07514a012f6e (hddtemp) was prepared for execution.
2026-02-04 00:40:24.362007 | orchestrator | 2026-02-04 00:40:24 | INFO  | It takes a moment until task 46c211e2-ab0c-4ec2-8c17-07514a012f6e (hddtemp) has been started and output is visible here.
2026-02-04 00:40:54.812957 | orchestrator |
2026-02-04 00:40:54.813037 | orchestrator | PLAY [Apply role hddtemp] ******************************************************
2026-02-04 00:40:54.813046 | orchestrator |
2026-02-04 00:40:54.813052 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] *****
2026-02-04 00:40:54.813059 | orchestrator | Wednesday 04 February 2026 00:40:29 +0000 (0:00:00.275) 0:00:00.275 ****
2026-02-04 00:40:54.813081 | orchestrator | ok: [testbed-manager]
2026-02-04 00:40:54.813088 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:40:54.813093 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:40:54.813099 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:40:54.813105 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:40:54.813111 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:40:54.813116 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:40:54.813121 | orchestrator |
2026-02-04 00:40:54.813127 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] ****
2026-02-04 00:40:54.813132 | orchestrator | Wednesday 04 February 2026 00:40:29 +0000 (0:00:00.771) 0:00:01.047 ****
2026-02-04 00:40:54.813140 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-04 00:40:54.813148 | orchestrator |
2026-02-04 00:40:54.813153 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] *************************
2026-02-04 00:40:54.813159 | orchestrator | Wednesday 04 February 2026 00:40:31 +0000 (0:00:01.257) 0:00:02.305 ****
2026-02-04 00:40:54.813164 | orchestrator | ok: [testbed-manager]
2026-02-04 00:40:54.813169 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:40:54.813175 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:40:54.813180 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:40:54.813185 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:40:54.813191 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:40:54.813196 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:40:54.813201 | orchestrator |
2026-02-04 00:40:54.813206 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] *****************
2026-02-04 00:40:54.813212 | orchestrator | Wednesday 04 February 2026 00:40:33 +0000 (0:00:02.224) 0:00:04.529 ****
2026-02-04 00:40:54.813217 | orchestrator | changed: [testbed-manager]
2026-02-04 00:40:54.813224 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:40:54.813229 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:40:54.813234 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:40:54.813240 | orchestrator | changed: [testbed-node-3]
2026-02-04 00:40:54.813245 | orchestrator | changed: [testbed-node-4]
2026-02-04 00:40:54.813250 | orchestrator | changed: [testbed-node-5]
2026-02-04 00:40:54.813255 | orchestrator |
2026-02-04 00:40:54.813261 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] *********
2026-02-04 00:40:54.813266 | orchestrator | Wednesday 04 February 2026 00:40:34 +0000 (0:00:01.291) 0:00:05.821 ****
2026-02-04 00:40:54.813272 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:40:54.813277 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:40:54.813282 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:40:54.813288 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:40:54.813293 | orchestrator | ok: [testbed-manager]
2026-02-04 00:40:54.813298 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:40:54.813303 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:40:54.813309 | orchestrator |
2026-02-04 00:40:54.813314 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] *******************
2026-02-04 00:40:54.813319 | orchestrator | Wednesday 04 February 2026 00:40:35 +0000 (0:00:01.223) 0:00:07.045 ****
2026-02-04 00:40:54.813325 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:40:54.813330 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:40:54.813335 | orchestrator | changed: [testbed-manager]
2026-02-04 00:40:54.813345 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:40:54.813350 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:40:54.813356 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:40:54.813361 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:40:54.813366 | orchestrator |
2026-02-04 00:40:54.813372 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] *****************************
2026-02-04 00:40:54.813377 | orchestrator | Wednesday 04 February 2026 00:40:36 +0000 (0:00:00.877) 0:00:07.922 ****
2026-02-04 00:40:54.813382 | orchestrator | changed: [testbed-manager]
2026-02-04 00:40:54.813391 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:40:54.813397 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:40:54.813402 | orchestrator | changed: [testbed-node-4]
2026-02-04 00:40:54.813408 | orchestrator | changed: [testbed-node-3]
2026-02-04 00:40:54.813413 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:40:54.813418 | orchestrator | changed: [testbed-node-5]
2026-02-04 00:40:54.813423 | orchestrator |
2026-02-04 00:40:54.813429 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] ****
2026-02-04 00:40:54.813434 | orchestrator | Wednesday 04 February 2026 00:40:51 +0000 (0:00:14.379) 0:00:22.302 ****
2026-02-04 00:40:54.813440 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-04 00:40:54.813445 | orchestrator |
2026-02-04 00:40:54.813450 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] **********************
2026-02-04 00:40:54.813456 | orchestrator | Wednesday 04 February 2026 00:40:52 +0000 (0:00:01.253) 0:00:23.556 ****
2026-02-04 00:40:54.813461 | orchestrator | changed: [testbed-manager]
2026-02-04 00:40:54.813466 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:40:54.813472 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:40:54.813477 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:40:54.813482 | orchestrator | changed: [testbed-node-3]
2026-02-04 00:40:54.813487 | orchestrator | changed: [testbed-node-4]
2026-02-04 00:40:54.813493 | orchestrator | changed: [testbed-node-5]
2026-02-04 00:40:54.813498 | orchestrator |
2026-02-04 00:40:54.813504 | orchestrator | PLAY RECAP *********************************************************************
2026-02-04 00:40:54.813509 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-04 00:40:54.813526 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-04 00:40:54.813533 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-04 00:40:54.813540 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-04 00:40:54.813546 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-04 00:40:54.813552 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-04 00:40:54.813559 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-04 00:40:54.813565 | orchestrator |
2026-02-04 00:40:54.813571 | orchestrator |
2026-02-04 00:40:54.813577 | orchestrator | TASKS RECAP ********************************************************************
2026-02-04 00:40:54.813583 | orchestrator | Wednesday 04 February 2026 00:40:54 +0000 (0:00:01.939) 0:00:25.496 ****
2026-02-04 00:40:54.813589 | orchestrator | ===============================================================================
2026-02-04 00:40:54.813596 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 14.38s
2026-02-04 00:40:54.813602 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 2.22s
2026-02-04 00:40:54.813608 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.94s
2026-02-04 00:40:54.813614 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.29s
2026-02-04 00:40:54.813729 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.26s
2026-02-04 00:40:54.813737 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.25s
2026-02-04 00:40:54.813749 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.22s
2026-02-04 00:40:54.813755 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.88s
2026-02-04 00:40:54.813760 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.77s
2026-02-04 00:40:55.164591 | orchestrator | ++ semver latest 7.1.1
2026-02-04 00:40:55.230097 | orchestrator | + [[ -1 -ge 0 ]]
2026-02-04 00:40:55.230190 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2026-02-04 00:40:55.230205 | orchestrator | + sudo systemctl restart manager.service
2026-02-04 00:41:08.386409 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-02-04 00:41:08.386513 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2026-02-04 00:41:08.386530 | orchestrator | + local max_attempts=60
2026-02-04 00:41:08.386544 | orchestrator | + local name=ceph-ansible
2026-02-04 00:41:08.386556 | orchestrator | + local attempt_num=1
2026-02-04 00:41:08.386568 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-02-04 00:41:08.423644 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-02-04 00:41:08.423743 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-02-04 00:41:08.423759 | orchestrator | + sleep 5
2026-02-04 00:41:13.428170 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-02-04 00:41:13.454574 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-02-04 00:41:13.454690 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-02-04 00:41:13.454699 | orchestrator | + sleep 5
2026-02-04 00:41:18.459227 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-02-04 00:41:18.490302 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-02-04 00:41:18.490396 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-02-04 00:41:18.490412 | orchestrator | + sleep 5
2026-02-04 00:41:23.494209 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-02-04 00:41:23.537097 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-02-04 00:41:23.537292 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-02-04 00:41:23.537315 | orchestrator | + sleep 5
2026-02-04 00:41:28.542697 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-02-04 00:41:28.570289 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-02-04 00:41:28.570383 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-02-04 00:41:28.570398 | orchestrator | + sleep 5
2026-02-04 00:41:33.574582 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-02-04 00:41:33.613909 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-02-04 00:41:33.613999 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-02-04 00:41:33.614012 | orchestrator | + sleep 5
2026-02-04 00:41:38.618248 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-02-04 00:41:38.655944 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-02-04 00:41:38.656049 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-02-04 00:41:38.656065 | orchestrator | + sleep 5
2026-02-04 00:41:43.659395 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-02-04 00:41:43.695348 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-02-04 00:41:43.695427 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-02-04 00:41:43.695436 | orchestrator | + sleep 5
2026-02-04 00:41:48.698708 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-02-04 00:41:48.866087 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-02-04 00:41:48.866192 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-02-04 00:41:48.866208 | orchestrator | + sleep 5
2026-02-04 00:41:53.870138 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-02-04 00:41:53.908895 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-02-04 00:41:53.908973 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-02-04 00:41:53.908982 | orchestrator | + sleep 5
2026-02-04 00:41:58.913127 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-02-04 00:41:58.953943 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-02-04 00:41:58.954115 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-02-04 00:41:58.954133 | orchestrator | + sleep 5
2026-02-04 00:42:03.958328 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-02-04 00:42:03.989643 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-02-04 00:42:03.989746 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-02-04 00:42:03.989806 | orchestrator | + sleep 5
2026-02-04 00:42:08.995462 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-02-04 00:42:09.035054 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-02-04 00:42:09.035115 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-02-04 00:42:09.035121 | orchestrator | + sleep 5
2026-02-04 00:42:14.039002 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-02-04 00:42:14.085462 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-02-04 00:42:14.085556 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2026-02-04 00:42:14.085572 | orchestrator | + local max_attempts=60
2026-02-04 00:42:14.085585 | orchestrator | + local name=kolla-ansible
2026-02-04 00:42:14.085636 | orchestrator | + local attempt_num=1
2026-02-04 00:42:14.085648 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2026-02-04 00:42:14.124119 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-02-04 00:42:14.124228 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2026-02-04 00:42:14.124253 | orchestrator | + local max_attempts=60
2026-02-04 00:42:14.124273 | orchestrator | + local name=osism-ansible
2026-02-04 00:42:14.124291 | orchestrator | + local attempt_num=1
2026-02-04 00:42:14.124311 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2026-02-04 00:42:14.151142 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-02-04 00:42:14.151239 | orchestrator | + [[ true == \t\r\u\e ]]
2026-02-04 00:42:14.151254 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2026-02-04 00:42:14.315128 | orchestrator | ARA in ceph-ansible already disabled.
2026-02-04 00:42:14.457985 | orchestrator | ARA in kolla-ansible already disabled.
2026-02-04 00:42:14.618679 | orchestrator | ARA in osism-ansible already disabled.
2026-02-04 00:42:14.771554 | orchestrator | ARA in osism-kubernetes already disabled.
2026-02-04 00:42:14.772848 | orchestrator | + osism apply gather-facts
2026-02-04 00:42:27.069563 | orchestrator | 2026-02-04 00:42:27 | INFO  | Prepare task for execution of gather-facts.
2026-02-04 00:42:27.141216 | orchestrator | 2026-02-04 00:42:27 | INFO  | Task 390b544d-0ba6-49d2-a318-1c8464a27849 (gather-facts) was prepared for execution.
2026-02-04 00:42:27.141302 | orchestrator | 2026-02-04 00:42:27 | INFO  | It takes a moment until task 390b544d-0ba6-49d2-a318-1c8464a27849 (gather-facts) has been started and output is visible here.
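The `wait_for_container_healthy` calls traced above poll `docker inspect` until the container's health status flips from `unhealthy`/`starting` to `healthy`. A reconstruction consistent with the traced commands (the `DOCKER` indirection and the timeout message are additions for illustration; the trace never exhausts its attempts, so the failure branch is an assumption):

```shell
# DOCKER may be overridden (e.g. in tests); defaults to the real binary
# seen in the trace.
: "${DOCKER:=/usr/bin/docker}"

# Poll a container's Docker health status every 5s until it is "healthy",
# giving up after max_attempts polls.
wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1
    until [[ "$($DOCKER inspect -f '{{.State.Health.Status}}' "$name")" == "healthy" ]]; do
        if (( attempt_num++ == max_attempts )); then
            echo "container $name not healthy after $max_attempts attempts" >&2
            return 1
        fi
        sleep 5
    done
}

# Usage, as in the trace:
# wait_for_container_healthy 60 ceph-ansible
```

With 60 attempts and a 5-second sleep this allows roughly five minutes for the container's health check to pass, which matches the ~66 seconds the `ceph-ansible` container actually needed above.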
2026-02-04 00:42:41.889984 | orchestrator | 2026-02-04 00:42:41.890112 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-02-04 00:42:41.890123 | orchestrator | 2026-02-04 00:42:41.890132 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-02-04 00:42:41.890143 | orchestrator | Wednesday 04 February 2026 00:42:31 +0000 (0:00:00.229) 0:00:00.229 **** 2026-02-04 00:42:41.890153 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:42:41.890165 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:42:41.890188 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:42:41.890203 | orchestrator | ok: [testbed-manager] 2026-02-04 00:42:41.890214 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:42:41.890224 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:42:41.890234 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:42:41.890244 | orchestrator | 2026-02-04 00:42:41.890253 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-02-04 00:42:41.890263 | orchestrator | 2026-02-04 00:42:41.890273 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-02-04 00:42:41.890284 | orchestrator | Wednesday 04 February 2026 00:42:40 +0000 (0:00:09.220) 0:00:09.450 **** 2026-02-04 00:42:41.890294 | orchestrator | skipping: [testbed-manager] 2026-02-04 00:42:41.890305 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:42:41.890316 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:42:41.890326 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:42:41.890337 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:42:41.890346 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:42:41.890356 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:42:41.890367 | orchestrator | 2026-02-04 00:42:41.890378 | orchestrator | PLAY RECAP 
********************************************************************* 2026-02-04 00:42:41.890416 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-04 00:42:41.890428 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-04 00:42:41.890439 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-04 00:42:41.890467 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-04 00:42:41.890478 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-04 00:42:41.890488 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-04 00:42:41.890499 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-04 00:42:41.890509 | orchestrator | 2026-02-04 00:42:41.890521 | orchestrator | 2026-02-04 00:42:41.890531 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-04 00:42:41.890542 | orchestrator | Wednesday 04 February 2026 00:42:41 +0000 (0:00:00.568) 0:00:10.018 **** 2026-02-04 00:42:41.890552 | orchestrator | =============================================================================== 2026-02-04 00:42:41.890559 | orchestrator | Gathers facts about hosts ----------------------------------------------- 9.22s 2026-02-04 00:42:41.890566 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.57s 2026-02-04 00:42:42.268694 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2026-02-04 00:42:42.283463 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2026-02-04 
00:42:42.298968 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2026-02-04 00:42:42.314174 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2026-02-04 00:42:42.326640 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2026-02-04 00:42:42.338945 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/320-openstack-minimal.sh /usr/local/bin/deploy-openstack-minimal 2026-02-04 00:42:42.352779 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2026-02-04 00:42:42.366970 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2026-02-04 00:42:42.386783 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2026-02-04 00:42:42.398737 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade-manager.sh /usr/local/bin/upgrade-manager 2026-02-04 00:42:42.415770 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2026-02-04 00:42:42.437745 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2026-02-04 00:42:42.453306 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2026-02-04 00:42:42.470292 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2026-02-04 00:42:42.486264 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/320-openstack-minimal.sh /usr/local/bin/upgrade-openstack-minimal 2026-02-04 00:42:42.503204 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2026-02-04 00:42:42.517068 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2026-02-04 00:42:42.536057 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2026-02-04 00:42:42.554845 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2026-02-04 00:42:42.570708 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2026-02-04 00:42:42.591102 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2026-02-04 00:42:42.604224 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2026-02-04 00:42:42.621007 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2026-02-04 00:42:42.634297 | orchestrator | + [[ false == \t\r\u\e ]] 2026-02-04 00:42:42.833882 | orchestrator | ok: Runtime: 0:25:11.716796 2026-02-04 00:42:42.940263 | 2026-02-04 00:42:42.940407 | TASK [Deploy services] 2026-02-04 00:42:43.476515 | orchestrator | skipping: Conditional result was False 2026-02-04 00:42:43.494528 | 2026-02-04 00:42:43.494699 | TASK [Deploy in a nutshell] 2026-02-04 00:42:44.203321 | orchestrator | 2026-02-04 00:42:44.203507 | orchestrator | # PULL IMAGES 2026-02-04 00:42:44.203530 | orchestrator | 2026-02-04 00:42:44.203545 | orchestrator | + set -e 2026-02-04 00:42:44.203563 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-04 00:42:44.203616 | orchestrator | ++ export INTERACTIVE=false 2026-02-04 00:42:44.203633 | orchestrator | ++ INTERACTIVE=false 2026-02-04 00:42:44.203678 | 
orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-04 00:42:44.203701 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-04 00:42:44.203716 | orchestrator | + source /opt/manager-vars.sh 2026-02-04 00:42:44.203728 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-04 00:42:44.203747 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-04 00:42:44.203759 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-04 00:42:44.203777 | orchestrator | ++ CEPH_VERSION=reef 2026-02-04 00:42:44.203790 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-02-04 00:42:44.203809 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-02-04 00:42:44.203820 | orchestrator | ++ export MANAGER_VERSION=latest 2026-02-04 00:42:44.203835 | orchestrator | ++ MANAGER_VERSION=latest 2026-02-04 00:42:44.203847 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-02-04 00:42:44.203860 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-02-04 00:42:44.203871 | orchestrator | ++ export ARA=false 2026-02-04 00:42:44.203882 | orchestrator | ++ ARA=false 2026-02-04 00:42:44.203894 | orchestrator | ++ export DEPLOY_MODE=manager 2026-02-04 00:42:44.203945 | orchestrator | ++ DEPLOY_MODE=manager 2026-02-04 00:42:44.203957 | orchestrator | ++ export TEMPEST=true 2026-02-04 00:42:44.203968 | orchestrator | ++ TEMPEST=true 2026-02-04 00:42:44.203980 | orchestrator | ++ export IS_ZUUL=true 2026-02-04 00:42:44.203991 | orchestrator | ++ IS_ZUUL=true 2026-02-04 00:42:44.204002 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.33 2026-02-04 00:42:44.204014 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.33 2026-02-04 00:42:44.204025 | orchestrator | ++ export EXTERNAL_API=false 2026-02-04 00:42:44.204036 | orchestrator | ++ EXTERNAL_API=false 2026-02-04 00:42:44.204047 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-02-04 00:42:44.204059 | orchestrator | ++ IMAGE_USER=ubuntu 2026-02-04 00:42:44.204070 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-02-04 00:42:44.204081 | 
orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-02-04 00:42:44.204093 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-02-04 00:42:44.204104 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-02-04 00:42:44.204115 | orchestrator | + echo 2026-02-04 00:42:44.204127 | orchestrator | + echo '# PULL IMAGES' 2026-02-04 00:42:44.204138 | orchestrator | + echo 2026-02-04 00:42:44.204164 | orchestrator | ++ semver latest 7.0.0 2026-02-04 00:42:44.255376 | orchestrator | + [[ -1 -ge 0 ]] 2026-02-04 00:42:44.255491 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-02-04 00:42:44.255509 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images 2026-02-04 00:42:46.378426 | orchestrator | 2026-02-04 00:42:46 | INFO  | Trying to run play pull-images in environment custom 2026-02-04 00:42:56.390490 | orchestrator | 2026-02-04 00:42:56 | INFO  | Prepare task for execution of pull-images. 2026-02-04 00:42:56.458795 | orchestrator | 2026-02-04 00:42:56 | INFO  | Task 996572eb-c773-4496-94f3-b6b4980de698 (pull-images) was prepared for execution. 2026-02-04 00:42:56.458881 | orchestrator | 2026-02-04 00:42:56 | INFO  | Task 996572eb-c773-4496-94f3-b6b4980de698 is running in background. No more output. Check ARA for logs. 2026-02-04 00:42:59.028387 | orchestrator | 2026-02-04 00:42:59 | INFO  | Trying to run play wipe-partitions in environment custom 2026-02-04 00:43:09.099330 | orchestrator | 2026-02-04 00:43:09 | INFO  | Prepare task for execution of wipe-partitions. 2026-02-04 00:43:09.188875 | orchestrator | 2026-02-04 00:43:09 | INFO  | Task f5517c4e-ec1a-49a1-98cf-2aa37856fa76 (wipe-partitions) was prepared for execution. 2026-02-04 00:43:09.188972 | orchestrator | 2026-02-04 00:43:09 | INFO  | It takes a moment until task f5517c4e-ec1a-49a1-98cf-2aa37856fa76 (wipe-partitions) has been started and output is visible here. 
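In the trace above, `semver latest 7.0.0` returns `-1`, so the numeric branch `[[ -1 -ge 0 ]]` is skipped and the explicit string match on `latest` selects the `pull-images` play. A minimal sketch of that gate, approximating the job's `semver` helper with `sort -V` (an assumption, not the job's actual code):

```shell
# Version gate sketch: "latest" or any version >= 7.0.0 takes the
# pull-images path; the fallback label below is hypothetical.
gate() {
  local v="$1"
  if [ "$v" = "latest" ] || \
     [ "$(printf '%s\n' 7.0.0 "$v" | sort -V | head -n1)" = "7.0.0" ]; then
    echo "pull-images"
  else
    echo "legacy-path"   # hypothetical label for the pre-7.0.0 branch
  fi
}
gate latest    # -> pull-images
```

Treating `latest` as a non-comparable tag and short-circuiting on it is what keeps the gate from misparsing a non-numeric version string.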
2026-02-04 00:43:23.556717 | orchestrator | 2026-02-04 00:43:23.556838 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2026-02-04 00:43:23.556855 | orchestrator | 2026-02-04 00:43:23.556867 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2026-02-04 00:43:23.556884 | orchestrator | Wednesday 04 February 2026 00:43:14 +0000 (0:00:00.133) 0:00:00.133 **** 2026-02-04 00:43:23.556924 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:43:23.556939 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:43:23.556952 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:43:23.556963 | orchestrator | 2026-02-04 00:43:23.556975 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2026-02-04 00:43:23.556987 | orchestrator | Wednesday 04 February 2026 00:43:15 +0000 (0:00:00.606) 0:00:00.739 **** 2026-02-04 00:43:23.557002 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:43:23.557014 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:43:23.557025 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:43:23.557036 | orchestrator | 2026-02-04 00:43:23.557048 | orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2026-02-04 00:43:23.557059 | orchestrator | Wednesday 04 February 2026 00:43:15 +0000 (0:00:00.422) 0:00:01.162 **** 2026-02-04 00:43:23.557070 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:43:23.557083 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:43:23.557094 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:43:23.557105 | orchestrator | 2026-02-04 00:43:23.557116 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2026-02-04 00:43:23.557127 | orchestrator | Wednesday 04 February 2026 00:43:16 +0000 (0:00:00.580) 0:00:01.742 **** 2026-02-04 00:43:23.557139 | orchestrator | skipping: 
[testbed-node-3] 2026-02-04 00:43:23.557150 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:43:23.557161 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:43:23.557172 | orchestrator | 2026-02-04 00:43:23.557184 | orchestrator | TASK [Check device availability] *********************************************** 2026-02-04 00:43:23.557197 | orchestrator | Wednesday 04 February 2026 00:43:16 +0000 (0:00:00.256) 0:00:01.998 **** 2026-02-04 00:43:23.557211 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2026-02-04 00:43:23.557228 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2026-02-04 00:43:23.557241 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2026-02-04 00:43:23.557254 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2026-02-04 00:43:23.557267 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2026-02-04 00:43:23.557279 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2026-02-04 00:43:23.557293 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2026-02-04 00:43:23.557305 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2026-02-04 00:43:23.557318 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2026-02-04 00:43:23.557331 | orchestrator | 2026-02-04 00:43:23.557345 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2026-02-04 00:43:23.557358 | orchestrator | Wednesday 04 February 2026 00:43:17 +0000 (0:00:01.303) 0:00:03.303 **** 2026-02-04 00:43:23.557372 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2026-02-04 00:43:23.557384 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2026-02-04 00:43:23.557397 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2026-02-04 00:43:23.557409 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2026-02-04 00:43:23.557422 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2026-02-04 00:43:23.557434 | orchestrator | ok: 
[testbed-node-5] => (item=/dev/sdc) 2026-02-04 00:43:23.557447 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2026-02-04 00:43:23.557459 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2026-02-04 00:43:23.557472 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2026-02-04 00:43:23.557485 | orchestrator | 2026-02-04 00:43:23.557499 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2026-02-04 00:43:23.557511 | orchestrator | Wednesday 04 February 2026 00:43:19 +0000 (0:00:01.632) 0:00:04.935 **** 2026-02-04 00:43:23.557525 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2026-02-04 00:43:23.557538 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2026-02-04 00:43:23.557551 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2026-02-04 00:43:23.557615 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2026-02-04 00:43:23.557637 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2026-02-04 00:43:23.557649 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2026-02-04 00:43:23.557660 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2026-02-04 00:43:23.557671 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2026-02-04 00:43:23.557682 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2026-02-04 00:43:23.557694 | orchestrator | 2026-02-04 00:43:23.557705 | orchestrator | TASK [Reload udev rules] ******************************************************* 2026-02-04 00:43:23.557716 | orchestrator | Wednesday 04 February 2026 00:43:21 +0000 (0:00:02.177) 0:00:07.112 **** 2026-02-04 00:43:23.557727 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:43:23.557738 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:43:23.557750 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:43:23.557761 | orchestrator | 2026-02-04 00:43:23.557772 | orchestrator | TASK [Request device events from the 
kernel] *********************************** 2026-02-04 00:43:23.557783 | orchestrator | Wednesday 04 February 2026 00:43:22 +0000 (0:00:00.664) 0:00:07.777 **** 2026-02-04 00:43:23.557795 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:43:23.557821 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:43:23.557832 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:43:23.557844 | orchestrator | 2026-02-04 00:43:23.557856 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 00:43:23.557868 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-04 00:43:23.557881 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-04 00:43:23.557910 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-04 00:43:23.557922 | orchestrator | 2026-02-04 00:43:23.557933 | orchestrator | 2026-02-04 00:43:23.557945 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-04 00:43:23.557956 | orchestrator | Wednesday 04 February 2026 00:43:23 +0000 (0:00:00.683) 0:00:08.461 **** 2026-02-04 00:43:23.557967 | orchestrator | =============================================================================== 2026-02-04 00:43:23.557978 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.18s 2026-02-04 00:43:23.557989 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.63s 2026-02-04 00:43:23.558001 | orchestrator | Check device availability ----------------------------------------------- 1.30s 2026-02-04 00:43:23.558012 | orchestrator | Request device events from the kernel ----------------------------------- 0.68s 2026-02-04 00:43:23.558079 | orchestrator | Reload udev rules 
------------------------------------------------------- 0.66s 2026-02-04 00:43:23.558091 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.61s 2026-02-04 00:43:23.558102 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.58s 2026-02-04 00:43:23.558113 | orchestrator | Remove all rook related logical devices --------------------------------- 0.42s 2026-02-04 00:43:23.558125 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.26s 2026-02-04 00:43:36.093041 | orchestrator | 2026-02-04 00:43:36 | INFO  | Prepare task for execution of facts. 2026-02-04 00:43:36.157641 | orchestrator | 2026-02-04 00:43:36 | INFO  | Task d58cd80b-5c05-40f0-8054-1f2b1db75e43 (facts) was prepared for execution. 2026-02-04 00:43:36.157742 | orchestrator | 2026-02-04 00:43:36 | INFO  | It takes a moment until task d58cd80b-5c05-40f0-8054-1f2b1db75e43 (facts) has been started and output is visible here. 
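The wipe-partitions play above reduces to a short, destructive per-disk sequence: erase filesystem/partition signatures with `wipefs`, zero the first 32M, then refresh udev so the kernel re-reads the now-empty devices. The sketch below is a dry run that echoes each command instead of executing it; the device list is taken from the play output, everything else is a hand-written approximation of the play, not its actual tasks:

```shell
# Dry-run wipe sequence; change run() to execute "$@" for real (destructive!).
run() { echo "+ $*"; }
for dev in /dev/sdb /dev/sdc /dev/sdd; do
  run wipefs --all "$dev"                                  # drop FS/partition signatures
  run dd if=/dev/zero of="$dev" bs=1M count=32 conv=fsync  # zero first 32M
done
run udevadm control --reload-rules                         # "Reload udev rules"
run udevadm trigger --subsystem-match=block                # "Request device events"
```

Zeroing the first 32M after `wipefs` also clears LVM/Ceph metadata that lives past the partition table, which is why the play does both steps rather than `wipefs` alone.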
2026-02-04 00:43:50.127750 | orchestrator | 2026-02-04 00:43:50.127891 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-02-04 00:43:50.127918 | orchestrator | 2026-02-04 00:43:50.127975 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-02-04 00:43:50.127995 | orchestrator | Wednesday 04 February 2026 00:43:40 +0000 (0:00:00.310) 0:00:00.310 **** 2026-02-04 00:43:50.128014 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:43:50.128036 | orchestrator | ok: [testbed-manager] 2026-02-04 00:43:50.128054 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:43:50.128073 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:43:50.128091 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:43:50.128110 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:43:50.128128 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:43:50.128146 | orchestrator | 2026-02-04 00:43:50.128185 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-02-04 00:43:50.128204 | orchestrator | Wednesday 04 February 2026 00:43:41 +0000 (0:00:01.148) 0:00:01.458 **** 2026-02-04 00:43:50.128223 | orchestrator | skipping: [testbed-manager] 2026-02-04 00:43:50.128242 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:43:50.128262 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:43:50.128282 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:43:50.128302 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:43:50.128322 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:43:50.128343 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:43:50.128363 | orchestrator | 2026-02-04 00:43:50.128383 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-02-04 00:43:50.128403 | orchestrator | 2026-02-04 00:43:50.128423 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-02-04 00:43:50.128444 | orchestrator | Wednesday 04 February 2026 00:43:43 +0000 (0:00:01.425) 0:00:02.884 **** 2026-02-04 00:43:50.128464 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:43:50.128484 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:43:50.128504 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:43:50.128524 | orchestrator | ok: [testbed-manager] 2026-02-04 00:43:50.128543 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:43:50.128621 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:43:50.128643 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:43:50.128663 | orchestrator | 2026-02-04 00:43:50.128681 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-02-04 00:43:50.128700 | orchestrator | 2026-02-04 00:43:50.128719 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-02-04 00:43:50.128739 | orchestrator | Wednesday 04 February 2026 00:43:49 +0000 (0:00:05.811) 0:00:08.695 **** 2026-02-04 00:43:50.128758 | orchestrator | skipping: [testbed-manager] 2026-02-04 00:43:50.128776 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:43:50.128795 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:43:50.128813 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:43:50.128830 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:43:50.128849 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:43:50.128867 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:43:50.128886 | orchestrator | 2026-02-04 00:43:50.128905 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 00:43:50.128923 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-04 00:43:50.128944 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-02-04 00:43:50.128963 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-04 00:43:50.128981 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-04 00:43:50.128999 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-04 00:43:50.129040 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-04 00:43:50.129059 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-04 00:43:50.129078 | orchestrator | 2026-02-04 00:43:50.129097 | orchestrator | 2026-02-04 00:43:50.129115 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-04 00:43:50.129134 | orchestrator | Wednesday 04 February 2026 00:43:49 +0000 (0:00:00.574) 0:00:09.270 **** 2026-02-04 00:43:50.129152 | orchestrator | =============================================================================== 2026-02-04 00:43:50.129170 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.81s 2026-02-04 00:43:50.129189 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.43s 2026-02-04 00:43:50.129206 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.15s 2026-02-04 00:43:50.129225 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.57s 2026-02-04 00:43:52.673016 | orchestrator | 2026-02-04 00:43:52 | INFO  | Prepare task for execution of ceph-configure-lvm-volumes. 2026-02-04 00:43:52.738834 | orchestrator | 2026-02-04 00:43:52 | INFO  | Task 97b49e0b-b739-4005-90c5-480154e40be9 (ceph-configure-lvm-volumes) was prepared for execution. 
2026-02-04 00:43:52.738940 | orchestrator | 2026-02-04 00:43:52 | INFO  | It takes a moment until task 97b49e0b-b739-4005-90c5-480154e40be9 (ceph-configure-lvm-volumes) has been started and output is visible here. 2026-02-04 00:44:05.747904 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-02-04 00:44:05.748009 | orchestrator | 2.16.14 2026-02-04 00:44:05.748023 | orchestrator | 2026-02-04 00:44:05.748042 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2026-02-04 00:44:05.748052 | orchestrator | 2026-02-04 00:44:05.748061 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-02-04 00:44:05.748070 | orchestrator | Wednesday 04 February 2026 00:43:57 +0000 (0:00:00.362) 0:00:00.362 **** 2026-02-04 00:44:05.748079 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-02-04 00:44:05.748087 | orchestrator | 2026-02-04 00:44:05.748095 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-02-04 00:44:05.748103 | orchestrator | Wednesday 04 February 2026 00:43:58 +0000 (0:00:00.252) 0:00:00.615 **** 2026-02-04 00:44:05.748112 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:44:05.748122 | orchestrator | 2026-02-04 00:44:05.748130 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 00:44:05.748139 | orchestrator | Wednesday 04 February 2026 00:43:58 +0000 (0:00:00.225) 0:00:00.840 **** 2026-02-04 00:44:05.748147 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2026-02-04 00:44:05.748155 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2026-02-04 00:44:05.748163 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2026-02-04 00:44:05.748171 | orchestrator | 
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2026-02-04 00:44:05.748179 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2026-02-04 00:44:05.748187 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2026-02-04 00:44:05.748195 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2026-02-04 00:44:05.748203 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2026-02-04 00:44:05.748212 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2026-02-04 00:44:05.748220 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2026-02-04 00:44:05.748248 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2026-02-04 00:44:05.748257 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2026-02-04 00:44:05.748265 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2026-02-04 00:44:05.748272 | orchestrator | 2026-02-04 00:44:05.748280 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 00:44:05.748288 | orchestrator | Wednesday 04 February 2026 00:43:58 +0000 (0:00:00.532) 0:00:01.373 **** 2026-02-04 00:44:05.748297 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:44:05.748305 | orchestrator | 2026-02-04 00:44:05.748313 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 00:44:05.748321 | orchestrator | Wednesday 04 February 2026 00:43:59 +0000 (0:00:00.210) 0:00:01.584 **** 2026-02-04 00:44:05.748330 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:44:05.748338 | orchestrator | 2026-02-04 00:44:05.748346 | 
orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 00:44:05.748357 | orchestrator | Wednesday 04 February 2026 00:43:59 +0000 (0:00:00.193) 0:00:01.777 **** 2026-02-04 00:44:05.748366 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:44:05.748374 | orchestrator | 2026-02-04 00:44:05.748382 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 00:44:05.748390 | orchestrator | Wednesday 04 February 2026 00:43:59 +0000 (0:00:00.221) 0:00:01.999 **** 2026-02-04 00:44:05.748398 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:44:05.748407 | orchestrator | 2026-02-04 00:44:05.748415 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 00:44:05.748423 | orchestrator | Wednesday 04 February 2026 00:43:59 +0000 (0:00:00.226) 0:00:02.225 **** 2026-02-04 00:44:05.748431 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:44:05.748439 | orchestrator | 2026-02-04 00:44:05.748447 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 00:44:05.748455 | orchestrator | Wednesday 04 February 2026 00:43:59 +0000 (0:00:00.207) 0:00:02.432 **** 2026-02-04 00:44:05.748463 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:44:05.748471 | orchestrator | 2026-02-04 00:44:05.748479 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 00:44:05.748487 | orchestrator | Wednesday 04 February 2026 00:44:00 +0000 (0:00:00.211) 0:00:02.644 **** 2026-02-04 00:44:05.748496 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:44:05.748504 | orchestrator | 2026-02-04 00:44:05.748512 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 00:44:05.748520 | orchestrator | Wednesday 04 February 2026 00:44:00 +0000 (0:00:00.222) 0:00:02.867 **** 
2026-02-04 00:44:05.748528 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:44:05.748536 | orchestrator | 2026-02-04 00:44:05.748545 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 00:44:05.748553 | orchestrator | Wednesday 04 February 2026 00:44:00 +0000 (0:00:00.196) 0:00:03.063 **** 2026-02-04 00:44:05.748583 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_80bce2bb-e18d-4255-9d30-172ea54b11f6) 2026-02-04 00:44:05.748593 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_80bce2bb-e18d-4255-9d30-172ea54b11f6) 2026-02-04 00:44:05.748601 | orchestrator | 2026-02-04 00:44:05.748609 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 00:44:05.748631 | orchestrator | Wednesday 04 February 2026 00:44:00 +0000 (0:00:00.443) 0:00:03.507 **** 2026-02-04 00:44:05.748639 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_03b06afa-7fea-4d7e-bf2e-7215727f5f52) 2026-02-04 00:44:05.748648 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_03b06afa-7fea-4d7e-bf2e-7215727f5f52) 2026-02-04 00:44:05.748656 | orchestrator | 2026-02-04 00:44:05.748664 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 00:44:05.748679 | orchestrator | Wednesday 04 February 2026 00:44:01 +0000 (0:00:00.689) 0:00:04.197 **** 2026-02-04 00:44:05.748687 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_54fed4a3-dd06-43ea-9731-a81abbed62bd) 2026-02-04 00:44:05.748696 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_54fed4a3-dd06-43ea-9731-a81abbed62bd) 2026-02-04 00:44:05.748704 | orchestrator | 2026-02-04 00:44:05.748712 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 00:44:05.748720 | orchestrator | Wednesday 04 February 2026 00:44:02 
+0000 (0:00:00.809) 0:00:05.006 **** 2026-02-04 00:44:05.748728 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_fd9a253f-e742-4747-9193-aa6fcde93089) 2026-02-04 00:44:05.748736 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_fd9a253f-e742-4747-9193-aa6fcde93089) 2026-02-04 00:44:05.748744 | orchestrator | 2026-02-04 00:44:05.748752 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 00:44:05.748761 | orchestrator | Wednesday 04 February 2026 00:44:03 +0000 (0:00:00.952) 0:00:05.958 **** 2026-02-04 00:44:05.748769 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-02-04 00:44:05.748781 | orchestrator | 2026-02-04 00:44:05.748794 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 00:44:05.748807 | orchestrator | Wednesday 04 February 2026 00:44:03 +0000 (0:00:00.356) 0:00:06.315 **** 2026-02-04 00:44:05.748836 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2026-02-04 00:44:05.748852 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2026-02-04 00:44:05.748866 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2026-02-04 00:44:05.748881 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2026-02-04 00:44:05.748896 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2026-02-04 00:44:05.748906 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2026-02-04 00:44:05.748914 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2026-02-04 00:44:05.748922 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for 
testbed-node-3 => (item=loop7) 2026-02-04 00:44:05.748930 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2026-02-04 00:44:05.748938 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2026-02-04 00:44:05.748946 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2026-02-04 00:44:05.748954 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2026-02-04 00:44:05.748962 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2026-02-04 00:44:05.748970 | orchestrator | 2026-02-04 00:44:05.748979 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 00:44:05.748986 | orchestrator | Wednesday 04 February 2026 00:44:04 +0000 (0:00:00.434) 0:00:06.749 **** 2026-02-04 00:44:05.748999 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:44:05.749012 | orchestrator | 2026-02-04 00:44:05.749027 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 00:44:05.749040 | orchestrator | Wednesday 04 February 2026 00:44:04 +0000 (0:00:00.228) 0:00:06.978 **** 2026-02-04 00:44:05.749052 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:44:05.749060 | orchestrator | 2026-02-04 00:44:05.749068 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 00:44:05.749076 | orchestrator | Wednesday 04 February 2026 00:44:04 +0000 (0:00:00.225) 0:00:07.203 **** 2026-02-04 00:44:05.749084 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:44:05.749097 | orchestrator | 2026-02-04 00:44:05.749105 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 00:44:05.749113 | orchestrator | Wednesday 04 February 2026 00:44:04 
+0000 (0:00:00.219) 0:00:07.423 **** 2026-02-04 00:44:05.749122 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:44:05.749130 | orchestrator | 2026-02-04 00:44:05.749142 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 00:44:05.749154 | orchestrator | Wednesday 04 February 2026 00:44:05 +0000 (0:00:00.224) 0:00:07.648 **** 2026-02-04 00:44:05.749167 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:44:05.749179 | orchestrator | 2026-02-04 00:44:05.749198 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 00:44:05.749211 | orchestrator | Wednesday 04 February 2026 00:44:05 +0000 (0:00:00.209) 0:00:07.858 **** 2026-02-04 00:44:05.749224 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:44:05.749239 | orchestrator | 2026-02-04 00:44:05.749252 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 00:44:05.749266 | orchestrator | Wednesday 04 February 2026 00:44:05 +0000 (0:00:00.224) 0:00:08.083 **** 2026-02-04 00:44:05.749280 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:44:05.749293 | orchestrator | 2026-02-04 00:44:05.749314 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 00:44:14.252222 | orchestrator | Wednesday 04 February 2026 00:44:05 +0000 (0:00:00.228) 0:00:08.312 **** 2026-02-04 00:44:14.252328 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:44:14.252343 | orchestrator | 2026-02-04 00:44:14.252355 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 00:44:14.252365 | orchestrator | Wednesday 04 February 2026 00:44:05 +0000 (0:00:00.228) 0:00:08.540 **** 2026-02-04 00:44:14.252376 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2026-02-04 00:44:14.252387 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2026-02-04 
00:44:14.252398 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2026-02-04 00:44:14.252408 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2026-02-04 00:44:14.252418 | orchestrator | 2026-02-04 00:44:14.252428 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 00:44:14.252438 | orchestrator | Wednesday 04 February 2026 00:44:07 +0000 (0:00:01.145) 0:00:09.685 **** 2026-02-04 00:44:14.252448 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:44:14.252458 | orchestrator | 2026-02-04 00:44:14.252468 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 00:44:14.252477 | orchestrator | Wednesday 04 February 2026 00:44:07 +0000 (0:00:00.216) 0:00:09.902 **** 2026-02-04 00:44:14.252487 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:44:14.252497 | orchestrator | 2026-02-04 00:44:14.252507 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 00:44:14.252517 | orchestrator | Wednesday 04 February 2026 00:44:07 +0000 (0:00:00.205) 0:00:10.107 **** 2026-02-04 00:44:14.252527 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:44:14.252537 | orchestrator | 2026-02-04 00:44:14.252547 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 00:44:14.252585 | orchestrator | Wednesday 04 February 2026 00:44:07 +0000 (0:00:00.214) 0:00:10.322 **** 2026-02-04 00:44:14.252603 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:44:14.252613 | orchestrator | 2026-02-04 00:44:14.252623 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-02-04 00:44:14.252632 | orchestrator | Wednesday 04 February 2026 00:44:07 +0000 (0:00:00.222) 0:00:10.545 **** 2026-02-04 00:44:14.252642 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None}) 2026-02-04 00:44:14.252652 | 
orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2026-02-04 00:44:14.252662 | orchestrator | 2026-02-04 00:44:14.252672 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-02-04 00:44:14.252682 | orchestrator | Wednesday 04 February 2026 00:44:08 +0000 (0:00:00.217) 0:00:10.763 **** 2026-02-04 00:44:14.252718 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:44:14.252729 | orchestrator | 2026-02-04 00:44:14.252739 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-02-04 00:44:14.252749 | orchestrator | Wednesday 04 February 2026 00:44:08 +0000 (0:00:00.154) 0:00:10.917 **** 2026-02-04 00:44:14.252759 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:44:14.252771 | orchestrator | 2026-02-04 00:44:14.252785 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-02-04 00:44:14.252796 | orchestrator | Wednesday 04 February 2026 00:44:08 +0000 (0:00:00.159) 0:00:11.076 **** 2026-02-04 00:44:14.252808 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:44:14.252820 | orchestrator | 2026-02-04 00:44:14.252831 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-02-04 00:44:14.252843 | orchestrator | Wednesday 04 February 2026 00:44:08 +0000 (0:00:00.124) 0:00:11.201 **** 2026-02-04 00:44:14.252855 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:44:14.252866 | orchestrator | 2026-02-04 00:44:14.252878 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-02-04 00:44:14.252889 | orchestrator | Wednesday 04 February 2026 00:44:08 +0000 (0:00:00.147) 0:00:11.349 **** 2026-02-04 00:44:14.252902 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'cab1220b-9ff6-5009-b197-fa753e4036d2'}}) 2026-02-04 00:44:14.252914 | orchestrator | ok: 
[testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '4adee4b4-d62b-5502-a742-8ac6c3138b01'}}) 2026-02-04 00:44:14.252926 | orchestrator | 2026-02-04 00:44:14.252937 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-02-04 00:44:14.252949 | orchestrator | Wednesday 04 February 2026 00:44:08 +0000 (0:00:00.189) 0:00:11.538 **** 2026-02-04 00:44:14.252960 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'cab1220b-9ff6-5009-b197-fa753e4036d2'}})  2026-02-04 00:44:14.252982 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '4adee4b4-d62b-5502-a742-8ac6c3138b01'}})  2026-02-04 00:44:14.252992 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:44:14.253002 | orchestrator | 2026-02-04 00:44:14.253012 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-02-04 00:44:14.253021 | orchestrator | Wednesday 04 February 2026 00:44:09 +0000 (0:00:00.159) 0:00:11.698 **** 2026-02-04 00:44:14.253031 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'cab1220b-9ff6-5009-b197-fa753e4036d2'}})  2026-02-04 00:44:14.253041 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '4adee4b4-d62b-5502-a742-8ac6c3138b01'}})  2026-02-04 00:44:14.253051 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:44:14.253060 | orchestrator | 2026-02-04 00:44:14.253070 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-02-04 00:44:14.253080 | orchestrator | Wednesday 04 February 2026 00:44:09 +0000 (0:00:00.367) 0:00:12.066 **** 2026-02-04 00:44:14.253089 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'cab1220b-9ff6-5009-b197-fa753e4036d2'}})  2026-02-04 00:44:14.253116 | orchestrator | skipping: [testbed-node-3] 
=> (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '4adee4b4-d62b-5502-a742-8ac6c3138b01'}})  2026-02-04 00:44:14.253127 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:44:14.253136 | orchestrator | 2026-02-04 00:44:14.253146 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-02-04 00:44:14.253156 | orchestrator | Wednesday 04 February 2026 00:44:09 +0000 (0:00:00.172) 0:00:12.238 **** 2026-02-04 00:44:14.253166 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:44:14.253175 | orchestrator | 2026-02-04 00:44:14.253185 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-02-04 00:44:14.253195 | orchestrator | Wednesday 04 February 2026 00:44:09 +0000 (0:00:00.173) 0:00:12.412 **** 2026-02-04 00:44:14.253204 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:44:14.253221 | orchestrator | 2026-02-04 00:44:14.253231 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-02-04 00:44:14.253240 | orchestrator | Wednesday 04 February 2026 00:44:09 +0000 (0:00:00.158) 0:00:12.570 **** 2026-02-04 00:44:14.253250 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:44:14.253260 | orchestrator | 2026-02-04 00:44:14.253280 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-02-04 00:44:14.253290 | orchestrator | Wednesday 04 February 2026 00:44:10 +0000 (0:00:00.209) 0:00:12.779 **** 2026-02-04 00:44:14.253300 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:44:14.253310 | orchestrator | 2026-02-04 00:44:14.253319 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-02-04 00:44:14.253329 | orchestrator | Wednesday 04 February 2026 00:44:10 +0000 (0:00:00.172) 0:00:12.952 **** 2026-02-04 00:44:14.253338 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:44:14.253348 | orchestrator | 2026-02-04 
00:44:14.253358 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-02-04 00:44:14.253367 | orchestrator | Wednesday 04 February 2026 00:44:10 +0000 (0:00:00.201) 0:00:13.154 **** 2026-02-04 00:44:14.253377 | orchestrator | ok: [testbed-node-3] => { 2026-02-04 00:44:14.253387 | orchestrator |  "ceph_osd_devices": { 2026-02-04 00:44:14.253397 | orchestrator |  "sdb": { 2026-02-04 00:44:14.253407 | orchestrator |  "osd_lvm_uuid": "cab1220b-9ff6-5009-b197-fa753e4036d2" 2026-02-04 00:44:14.253417 | orchestrator |  }, 2026-02-04 00:44:14.253427 | orchestrator |  "sdc": { 2026-02-04 00:44:14.253437 | orchestrator |  "osd_lvm_uuid": "4adee4b4-d62b-5502-a742-8ac6c3138b01" 2026-02-04 00:44:14.253447 | orchestrator |  } 2026-02-04 00:44:14.253456 | orchestrator |  } 2026-02-04 00:44:14.253467 | orchestrator | } 2026-02-04 00:44:14.253477 | orchestrator | 2026-02-04 00:44:14.253486 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-02-04 00:44:14.253496 | orchestrator | Wednesday 04 February 2026 00:44:10 +0000 (0:00:00.258) 0:00:13.412 **** 2026-02-04 00:44:14.253506 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:44:14.253516 | orchestrator | 2026-02-04 00:44:14.253525 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-02-04 00:44:14.253535 | orchestrator | Wednesday 04 February 2026 00:44:10 +0000 (0:00:00.146) 0:00:13.559 **** 2026-02-04 00:44:14.253544 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:44:14.253592 | orchestrator | 2026-02-04 00:44:14.253605 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-02-04 00:44:14.253615 | orchestrator | Wednesday 04 February 2026 00:44:11 +0000 (0:00:00.139) 0:00:13.698 **** 2026-02-04 00:44:14.253625 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:44:14.253634 | orchestrator | 2026-02-04 
00:44:14.253644 | orchestrator | TASK [Print configuration data] ************************************************ 2026-02-04 00:44:14.253654 | orchestrator | Wednesday 04 February 2026 00:44:11 +0000 (0:00:00.139) 0:00:13.837 **** 2026-02-04 00:44:14.253664 | orchestrator | changed: [testbed-node-3] => { 2026-02-04 00:44:14.253674 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-02-04 00:44:14.253684 | orchestrator |  "ceph_osd_devices": { 2026-02-04 00:44:14.253694 | orchestrator |  "sdb": { 2026-02-04 00:44:14.253703 | orchestrator |  "osd_lvm_uuid": "cab1220b-9ff6-5009-b197-fa753e4036d2" 2026-02-04 00:44:14.253713 | orchestrator |  }, 2026-02-04 00:44:14.253724 | orchestrator |  "sdc": { 2026-02-04 00:44:14.253734 | orchestrator |  "osd_lvm_uuid": "4adee4b4-d62b-5502-a742-8ac6c3138b01" 2026-02-04 00:44:14.253743 | orchestrator |  } 2026-02-04 00:44:14.253753 | orchestrator |  }, 2026-02-04 00:44:14.253763 | orchestrator |  "lvm_volumes": [ 2026-02-04 00:44:14.253773 | orchestrator |  { 2026-02-04 00:44:14.253783 | orchestrator |  "data": "osd-block-cab1220b-9ff6-5009-b197-fa753e4036d2", 2026-02-04 00:44:14.253792 | orchestrator |  "data_vg": "ceph-cab1220b-9ff6-5009-b197-fa753e4036d2" 2026-02-04 00:44:14.253809 | orchestrator |  }, 2026-02-04 00:44:14.253818 | orchestrator |  { 2026-02-04 00:44:14.253828 | orchestrator |  "data": "osd-block-4adee4b4-d62b-5502-a742-8ac6c3138b01", 2026-02-04 00:44:14.253838 | orchestrator |  "data_vg": "ceph-4adee4b4-d62b-5502-a742-8ac6c3138b01" 2026-02-04 00:44:14.253848 | orchestrator |  } 2026-02-04 00:44:14.253857 | orchestrator |  ] 2026-02-04 00:44:14.253867 | orchestrator |  } 2026-02-04 00:44:14.253877 | orchestrator | } 2026-02-04 00:44:14.253887 | orchestrator | 2026-02-04 00:44:14.253897 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-02-04 00:44:14.253906 | orchestrator | Wednesday 04 February 2026 00:44:11 +0000 (0:00:00.554) 0:00:14.392 **** 2026-02-04 
00:44:14.253916 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-02-04 00:44:14.253926 | orchestrator | 2026-02-04 00:44:14.253935 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2026-02-04 00:44:14.253945 | orchestrator | 2026-02-04 00:44:14.253954 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-02-04 00:44:14.253964 | orchestrator | Wednesday 04 February 2026 00:44:13 +0000 (0:00:01.923) 0:00:16.315 **** 2026-02-04 00:44:14.253974 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-02-04 00:44:14.253983 | orchestrator | 2026-02-04 00:44:14.253998 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-02-04 00:44:14.254008 | orchestrator | Wednesday 04 February 2026 00:44:14 +0000 (0:00:00.272) 0:00:16.588 **** 2026-02-04 00:44:14.254070 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:44:14.254081 | orchestrator | 2026-02-04 00:44:14.254099 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 00:44:23.971607 | orchestrator | Wednesday 04 February 2026 00:44:14 +0000 (0:00:00.231) 0:00:16.820 **** 2026-02-04 00:44:23.971699 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-02-04 00:44:23.971711 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-02-04 00:44:23.971719 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-02-04 00:44:23.971727 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-02-04 00:44:23.971734 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-02-04 00:44:23.971742 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-02-04 00:44:23.971749 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-02-04 00:44:23.971760 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-02-04 00:44:23.971768 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-02-04 00:44:23.971776 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-02-04 00:44:23.971784 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-02-04 00:44:23.971791 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-02-04 00:44:23.971799 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-02-04 00:44:23.971806 | orchestrator | 2026-02-04 00:44:23.971814 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 00:44:23.971822 | orchestrator | Wednesday 04 February 2026 00:44:14 +0000 (0:00:00.443) 0:00:17.263 **** 2026-02-04 00:44:23.971831 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:44:23.971845 | orchestrator | 2026-02-04 00:44:23.971858 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 00:44:23.971870 | orchestrator | Wednesday 04 February 2026 00:44:15 +0000 (0:00:00.416) 0:00:17.680 **** 2026-02-04 00:44:23.971910 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:44:23.971924 | orchestrator | 2026-02-04 00:44:23.971938 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 00:44:23.971946 | orchestrator | Wednesday 04 February 2026 00:44:15 +0000 (0:00:00.202) 0:00:17.883 **** 2026-02-04 00:44:23.971953 | orchestrator | skipping: 
[testbed-node-4] 2026-02-04 00:44:23.971960 | orchestrator | 2026-02-04 00:44:23.971968 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 00:44:23.971975 | orchestrator | Wednesday 04 February 2026 00:44:15 +0000 (0:00:00.194) 0:00:18.078 **** 2026-02-04 00:44:23.971982 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:44:23.971989 | orchestrator | 2026-02-04 00:44:23.971997 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 00:44:23.972004 | orchestrator | Wednesday 04 February 2026 00:44:15 +0000 (0:00:00.192) 0:00:18.270 **** 2026-02-04 00:44:23.972011 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:44:23.972019 | orchestrator | 2026-02-04 00:44:23.972026 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 00:44:23.972033 | orchestrator | Wednesday 04 February 2026 00:44:16 +0000 (0:00:00.680) 0:00:18.951 **** 2026-02-04 00:44:23.972041 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:44:23.972048 | orchestrator | 2026-02-04 00:44:23.972055 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 00:44:23.972062 | orchestrator | Wednesday 04 February 2026 00:44:16 +0000 (0:00:00.220) 0:00:19.172 **** 2026-02-04 00:44:23.972069 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:44:23.972075 | orchestrator | 2026-02-04 00:44:23.972082 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 00:44:23.972089 | orchestrator | Wednesday 04 February 2026 00:44:16 +0000 (0:00:00.211) 0:00:19.383 **** 2026-02-04 00:44:23.972095 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:44:23.972104 | orchestrator | 2026-02-04 00:44:23.972115 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 00:44:23.972127 | 
orchestrator | Wednesday 04 February 2026 00:44:17 +0000 (0:00:00.228) 0:00:19.612 **** 2026-02-04 00:44:23.972138 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_a371d88b-21aa-46d0-9a00-c59fe370106e) 2026-02-04 00:44:23.972150 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_a371d88b-21aa-46d0-9a00-c59fe370106e) 2026-02-04 00:44:23.972162 | orchestrator | 2026-02-04 00:44:23.972189 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 00:44:23.972201 | orchestrator | Wednesday 04 February 2026 00:44:17 +0000 (0:00:00.527) 0:00:20.140 **** 2026-02-04 00:44:23.972210 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_27db8536-d7cf-467f-b2b6-f0129584608d) 2026-02-04 00:44:23.972218 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_27db8536-d7cf-467f-b2b6-f0129584608d) 2026-02-04 00:44:23.972230 | orchestrator | 2026-02-04 00:44:23.972241 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 00:44:23.972253 | orchestrator | Wednesday 04 February 2026 00:44:18 +0000 (0:00:00.506) 0:00:20.646 **** 2026-02-04 00:44:23.972265 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_d0af9621-3ff0-4b28-b816-705c9ef71a8d) 2026-02-04 00:44:23.972277 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_d0af9621-3ff0-4b28-b816-705c9ef71a8d) 2026-02-04 00:44:23.972288 | orchestrator | 2026-02-04 00:44:23.972301 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 00:44:23.972329 | orchestrator | Wednesday 04 February 2026 00:44:18 +0000 (0:00:00.545) 0:00:21.192 **** 2026-02-04 00:44:23.972342 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_092c1f4e-b194-45a0-a7eb-d90ae37efda4) 2026-02-04 00:44:23.972354 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-SQEMU_QEMU_HARDDISK_092c1f4e-b194-45a0-a7eb-d90ae37efda4) 2026-02-04 00:44:23.972365 | orchestrator | 2026-02-04 00:44:23.972388 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 00:44:23.972400 | orchestrator | Wednesday 04 February 2026 00:44:19 +0000 (0:00:00.468) 0:00:21.660 **** 2026-02-04 00:44:23.972411 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-02-04 00:44:23.972422 | orchestrator | 2026-02-04 00:44:23.972433 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 00:44:23.972445 | orchestrator | Wednesday 04 February 2026 00:44:19 +0000 (0:00:00.363) 0:00:22.023 **** 2026-02-04 00:44:23.972456 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2026-02-04 00:44:23.972466 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-02-04 00:44:23.972478 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-02-04 00:44:23.972488 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-02-04 00:44:23.972498 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-02-04 00:44:23.972510 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-02-04 00:44:23.972522 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-02-04 00:44:23.972533 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-02-04 00:44:23.972543 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-02-04 00:44:23.972569 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-02-04 00:44:23.972579 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2026-02-04 00:44:23.972590 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-02-04 00:44:23.972601 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-02-04 00:44:23.972612 | orchestrator | 2026-02-04 00:44:23.972623 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 00:44:23.972634 | orchestrator | Wednesday 04 February 2026 00:44:19 +0000 (0:00:00.445) 0:00:22.469 **** 2026-02-04 00:44:23.972644 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:44:23.972655 | orchestrator | 2026-02-04 00:44:23.972666 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 00:44:23.972677 | orchestrator | Wednesday 04 February 2026 00:44:20 +0000 (0:00:00.722) 0:00:23.192 **** 2026-02-04 00:44:23.972689 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:44:23.972701 | orchestrator | 2026-02-04 00:44:23.972713 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 00:44:23.972724 | orchestrator | Wednesday 04 February 2026 00:44:20 +0000 (0:00:00.250) 0:00:23.442 **** 2026-02-04 00:44:23.972735 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:44:23.972747 | orchestrator | 2026-02-04 00:44:23.972758 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 00:44:23.972769 | orchestrator | Wednesday 04 February 2026 00:44:21 +0000 (0:00:00.212) 0:00:23.655 **** 2026-02-04 00:44:23.972780 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:44:23.972790 | orchestrator | 2026-02-04 00:44:23.972801 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2026-02-04 00:44:23.972812 | orchestrator | Wednesday 04 February 2026 00:44:21 +0000 (0:00:00.227) 0:00:23.882 **** 2026-02-04 00:44:23.972824 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:44:23.972834 | orchestrator | 2026-02-04 00:44:23.972841 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 00:44:23.972848 | orchestrator | Wednesday 04 February 2026 00:44:21 +0000 (0:00:00.242) 0:00:24.125 **** 2026-02-04 00:44:23.972855 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:44:23.972868 | orchestrator | 2026-02-04 00:44:23.972881 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 00:44:23.972888 | orchestrator | Wednesday 04 February 2026 00:44:21 +0000 (0:00:00.371) 0:00:24.496 **** 2026-02-04 00:44:23.972895 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:44:23.972901 | orchestrator | 2026-02-04 00:44:23.972908 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 00:44:23.972915 | orchestrator | Wednesday 04 February 2026 00:44:22 +0000 (0:00:00.297) 0:00:24.794 **** 2026-02-04 00:44:23.972921 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:44:23.972928 | orchestrator | 2026-02-04 00:44:23.972934 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 00:44:23.972941 | orchestrator | Wednesday 04 February 2026 00:44:22 +0000 (0:00:00.216) 0:00:25.011 **** 2026-02-04 00:44:23.972949 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-02-04 00:44:23.972962 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-02-04 00:44:23.972973 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-02-04 00:44:23.972984 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-02-04 00:44:23.972995 | orchestrator | 2026-02-04 
00:44:23.973006 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 00:44:23.973017 | orchestrator | Wednesday 04 February 2026 00:44:23 +0000 (0:00:01.318) 0:00:26.329 **** 2026-02-04 00:44:23.973028 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:44:31.563428 | orchestrator | 2026-02-04 00:44:31.563508 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 00:44:31.563516 | orchestrator | Wednesday 04 February 2026 00:44:24 +0000 (0:00:00.366) 0:00:26.695 **** 2026-02-04 00:44:31.563520 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:44:31.563526 | orchestrator | 2026-02-04 00:44:31.563531 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 00:44:31.563535 | orchestrator | Wednesday 04 February 2026 00:44:24 +0000 (0:00:00.333) 0:00:27.029 **** 2026-02-04 00:44:31.563539 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:44:31.563543 | orchestrator | 2026-02-04 00:44:31.563547 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 00:44:31.563575 | orchestrator | Wednesday 04 February 2026 00:44:24 +0000 (0:00:00.220) 0:00:27.249 **** 2026-02-04 00:44:31.563581 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:44:31.563587 | orchestrator | 2026-02-04 00:44:31.563592 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-02-04 00:44:31.563598 | orchestrator | Wednesday 04 February 2026 00:44:25 +0000 (0:00:00.620) 0:00:27.870 **** 2026-02-04 00:44:31.563605 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2026-02-04 00:44:31.563611 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2026-02-04 00:44:31.563617 | orchestrator | 2026-02-04 00:44:31.563632 | orchestrator | TASK [Generate WAL VG names] 
*************************************************** 2026-02-04 00:44:31.563640 | orchestrator | Wednesday 04 February 2026 00:44:25 +0000 (0:00:00.230) 0:00:28.100 **** 2026-02-04 00:44:31.563644 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:44:31.563648 | orchestrator | 2026-02-04 00:44:31.563652 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-02-04 00:44:31.563657 | orchestrator | Wednesday 04 February 2026 00:44:25 +0000 (0:00:00.160) 0:00:28.261 **** 2026-02-04 00:44:31.563660 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:44:31.563664 | orchestrator | 2026-02-04 00:44:31.563668 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-02-04 00:44:31.563672 | orchestrator | Wednesday 04 February 2026 00:44:25 +0000 (0:00:00.157) 0:00:28.419 **** 2026-02-04 00:44:31.563676 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:44:31.563680 | orchestrator | 2026-02-04 00:44:31.563684 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-02-04 00:44:31.563688 | orchestrator | Wednesday 04 February 2026 00:44:26 +0000 (0:00:00.157) 0:00:28.577 **** 2026-02-04 00:44:31.563708 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:44:31.563713 | orchestrator | 2026-02-04 00:44:31.563717 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-02-04 00:44:31.563721 | orchestrator | Wednesday 04 February 2026 00:44:26 +0000 (0:00:00.128) 0:00:28.705 **** 2026-02-04 00:44:31.563726 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '6cd3944c-50dd-590e-9699-94e09e9b1959'}}) 2026-02-04 00:44:31.563730 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '197bc0b1-bda8-5def-b850-786176b935dd'}}) 2026-02-04 00:44:31.563734 | orchestrator | 2026-02-04 00:44:31.563738 | orchestrator | TASK 
[Generate lvm_volumes structure (block + db)] ***************************** 2026-02-04 00:44:31.563742 | orchestrator | Wednesday 04 February 2026 00:44:26 +0000 (0:00:00.218) 0:00:28.924 **** 2026-02-04 00:44:31.563746 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '6cd3944c-50dd-590e-9699-94e09e9b1959'}})  2026-02-04 00:44:31.563752 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '197bc0b1-bda8-5def-b850-786176b935dd'}})  2026-02-04 00:44:31.563756 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:44:31.563760 | orchestrator | 2026-02-04 00:44:31.563763 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-02-04 00:44:31.563767 | orchestrator | Wednesday 04 February 2026 00:44:26 +0000 (0:00:00.196) 0:00:29.121 **** 2026-02-04 00:44:31.563771 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '6cd3944c-50dd-590e-9699-94e09e9b1959'}})  2026-02-04 00:44:31.563775 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '197bc0b1-bda8-5def-b850-786176b935dd'}})  2026-02-04 00:44:31.563779 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:44:31.563783 | orchestrator | 2026-02-04 00:44:31.563787 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-02-04 00:44:31.563791 | orchestrator | Wednesday 04 February 2026 00:44:26 +0000 (0:00:00.175) 0:00:29.296 **** 2026-02-04 00:44:31.563794 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '6cd3944c-50dd-590e-9699-94e09e9b1959'}})  2026-02-04 00:44:31.563798 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '197bc0b1-bda8-5def-b850-786176b935dd'}})  2026-02-04 00:44:31.563802 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:44:31.563806 | 
orchestrator |
2026-02-04 00:44:31.563822 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2026-02-04 00:44:31.563826 | orchestrator | Wednesday 04 February 2026 00:44:26 +0000 (0:00:00.147) 0:00:29.444 ****
2026-02-04 00:44:31.563830 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:44:31.563834 | orchestrator |
2026-02-04 00:44:31.563838 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-02-04 00:44:31.563841 | orchestrator | Wednesday 04 February 2026 00:44:27 +0000 (0:00:00.136) 0:00:29.581 ****
2026-02-04 00:44:31.563845 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:44:31.563849 | orchestrator |
2026-02-04 00:44:31.563853 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-02-04 00:44:31.563857 | orchestrator | Wednesday 04 February 2026 00:44:27 +0000 (0:00:00.135) 0:00:29.716 ****
2026-02-04 00:44:31.563872 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:44:31.563876 | orchestrator |
2026-02-04 00:44:31.563880 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-02-04 00:44:31.563884 | orchestrator | Wednesday 04 February 2026 00:44:27 +0000 (0:00:00.168) 0:00:30.097 ****
2026-02-04 00:44:31.563888 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:44:31.563892 | orchestrator |
2026-02-04 00:44:31.563896 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-02-04 00:44:31.563899 | orchestrator | Wednesday 04 February 2026 00:44:27 +0000 (0:00:00.143) 0:00:30.266 ****
2026-02-04 00:44:31.563903 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:44:31.563911 | orchestrator |
2026-02-04 00:44:31.563915 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-02-04 00:44:31.563918 | orchestrator | Wednesday 04 February 2026 00:44:27 +0000 (0:00:00.143) 0:00:30.409 ****
2026-02-04 00:44:31.563922 | orchestrator | ok: [testbed-node-4] => {
2026-02-04 00:44:31.563926 | orchestrator |     "ceph_osd_devices": {
2026-02-04 00:44:31.563930 | orchestrator |         "sdb": {
2026-02-04 00:44:31.563935 | orchestrator |             "osd_lvm_uuid": "6cd3944c-50dd-590e-9699-94e09e9b1959"
2026-02-04 00:44:31.563939 | orchestrator |         },
2026-02-04 00:44:31.563943 | orchestrator |         "sdc": {
2026-02-04 00:44:31.563947 | orchestrator |             "osd_lvm_uuid": "197bc0b1-bda8-5def-b850-786176b935dd"
2026-02-04 00:44:31.563951 | orchestrator |         }
2026-02-04 00:44:31.563955 | orchestrator |     }
2026-02-04 00:44:31.563959 | orchestrator | }
2026-02-04 00:44:31.563963 | orchestrator |
2026-02-04 00:44:31.563967 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-02-04 00:44:31.563971 | orchestrator | Wednesday 04 February 2026 00:44:27 +0000 (0:00:00.142) 0:00:30.552 ****
2026-02-04 00:44:31.563975 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:44:31.563979 | orchestrator |
2026-02-04 00:44:31.563984 | orchestrator | TASK [Print DB devices] ********************************************************
2026-02-04 00:44:31.563989 | orchestrator | Wednesday 04 February 2026 00:44:28 +0000 (0:00:00.153) 0:00:30.705 ****
2026-02-04 00:44:31.563993 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:44:31.563997 | orchestrator |
2026-02-04 00:44:31.564002 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-02-04 00:44:31.564007 | orchestrator | Wednesday 04 February 2026 00:44:28 +0000 (0:00:00.165) 0:00:30.870 ****
2026-02-04 00:44:31.564011 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:44:31.564015 | orchestrator |
2026-02-04 00:44:31.564020 | orchestrator | TASK [Print configuration data] ************************************************
2026-02-04 00:44:31.564024 | orchestrator | Wednesday 04 February 2026 00:44:28 +0000 (0:00:00.129) 0:00:31.000 ****
2026-02-04 00:44:31.564029 | orchestrator | changed: [testbed-node-4] => {
2026-02-04 00:44:31.564033 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-02-04 00:44:31.564038 | orchestrator |         "ceph_osd_devices": {
2026-02-04 00:44:31.564043 | orchestrator |             "sdb": {
2026-02-04 00:44:31.564047 | orchestrator |                 "osd_lvm_uuid": "6cd3944c-50dd-590e-9699-94e09e9b1959"
2026-02-04 00:44:31.564052 | orchestrator |             },
2026-02-04 00:44:31.564056 | orchestrator |             "sdc": {
2026-02-04 00:44:31.564061 | orchestrator |                 "osd_lvm_uuid": "197bc0b1-bda8-5def-b850-786176b935dd"
2026-02-04 00:44:31.564065 | orchestrator |             }
2026-02-04 00:44:31.564070 | orchestrator |         },
2026-02-04 00:44:31.564074 | orchestrator |         "lvm_volumes": [
2026-02-04 00:44:31.564079 | orchestrator |             {
2026-02-04 00:44:31.564084 | orchestrator |                 "data": "osd-block-6cd3944c-50dd-590e-9699-94e09e9b1959",
2026-02-04 00:44:31.564089 | orchestrator |                 "data_vg": "ceph-6cd3944c-50dd-590e-9699-94e09e9b1959"
2026-02-04 00:44:31.564093 | orchestrator |             },
2026-02-04 00:44:31.564097 | orchestrator |             {
2026-02-04 00:44:31.564102 | orchestrator |                 "data": "osd-block-197bc0b1-bda8-5def-b850-786176b935dd",
2026-02-04 00:44:31.564106 | orchestrator |                 "data_vg": "ceph-197bc0b1-bda8-5def-b850-786176b935dd"
2026-02-04 00:44:31.564111 | orchestrator |             }
2026-02-04 00:44:31.564115 | orchestrator |         ]
2026-02-04 00:44:31.564120 | orchestrator |     }
2026-02-04 00:44:31.564124 | orchestrator | }
2026-02-04 00:44:31.564129 | orchestrator |
2026-02-04 00:44:31.564133 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-02-04 00:44:31.564138 | orchestrator | Wednesday 04 February 2026 00:44:28 +0000 (0:00:00.212) 0:00:31.213 ****
2026-02-04 00:44:31.564142 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2026-02-04 00:44:31.564147 | orchestrator |
2026-02-04 00:44:31.564155 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-02-04 00:44:31.564159 | orchestrator |
2026-02-04 00:44:31.564164 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-02-04 00:44:31.564168 | orchestrator | Wednesday 04 February 2026 00:44:29 +0000 (0:00:01.283) 0:00:32.497 ****
2026-02-04 00:44:31.564172 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-02-04 00:44:31.564177 | orchestrator |
2026-02-04 00:44:31.564181 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-02-04 00:44:31.564186 | orchestrator | Wednesday 04 February 2026 00:44:30 +0000 (0:00:00.962) 0:00:33.459 ****
2026-02-04 00:44:31.564191 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:44:31.564195 | orchestrator |
2026-02-04 00:44:31.564200 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-04 00:44:31.564204 | orchestrator | Wednesday 04 February 2026 00:44:31 +0000 (0:00:00.278) 0:00:33.738 ****
2026-02-04 00:44:31.564208 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2026-02-04 00:44:31.564213 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2026-02-04 00:44:31.564217 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2026-02-04 00:44:31.564222 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2026-02-04 00:44:31.564226 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2026-02-04 00:44:31.564233 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2026-02-04 00:44:41.038342 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2026-02-04 00:44:41.038425 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2026-02-04 00:44:41.038432 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2026-02-04 00:44:41.038438 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2026-02-04 00:44:41.038457 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2026-02-04 00:44:41.038462 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2026-02-04 00:44:41.038468 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2026-02-04 00:44:41.038473 | orchestrator |
2026-02-04 00:44:41.038479 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-04 00:44:41.038486 | orchestrator | Wednesday 04 February 2026 00:44:31 +0000 (0:00:00.482) 0:00:34.220 ****
2026-02-04 00:44:41.038491 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:44:41.038498 | orchestrator |
2026-02-04 00:44:41.038504 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-04 00:44:41.038508 | orchestrator | Wednesday 04 February 2026 00:44:32 +0000 (0:00:00.360) 0:00:34.581 ****
2026-02-04 00:44:41.038513 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:44:41.038519 | orchestrator |
2026-02-04 00:44:41.038524 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-04 00:44:41.038529 | orchestrator | Wednesday 04 February 2026 00:44:32 +0000 (0:00:00.238) 0:00:34.820 ****
2026-02-04 00:44:41.038534 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:44:41.038539 | orchestrator |
2026-02-04 00:44:41.038543 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-04 00:44:41.038634 | orchestrator | Wednesday 04 February 2026 00:44:32 +0000 (0:00:00.216) 0:00:35.036 ****
2026-02-04 00:44:41.038647 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:44:41.038656 | orchestrator |
2026-02-04 00:44:41.038664 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-04 00:44:41.038672 | orchestrator | Wednesday 04 February 2026 00:44:32 +0000 (0:00:00.200) 0:00:35.237 ****
2026-02-04 00:44:41.038700 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:44:41.038706 | orchestrator |
2026-02-04 00:44:41.038711 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-04 00:44:41.038716 | orchestrator | Wednesday 04 February 2026 00:44:32 +0000 (0:00:00.236) 0:00:35.473 ****
2026-02-04 00:44:41.038721 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:44:41.038726 | orchestrator |
2026-02-04 00:44:41.038731 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-04 00:44:41.038736 | orchestrator | Wednesday 04 February 2026 00:44:33 +0000 (0:00:00.250) 0:00:35.724 ****
2026-02-04 00:44:41.038741 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:44:41.038745 | orchestrator |
2026-02-04 00:44:41.038751 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-04 00:44:41.038756 | orchestrator | Wednesday 04 February 2026 00:44:33 +0000 (0:00:00.249) 0:00:35.973 ****
2026-02-04 00:44:41.038761 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:44:41.038766 | orchestrator |
2026-02-04 00:44:41.038773 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-04 00:44:41.038781 | orchestrator | Wednesday 04 February 2026 00:44:33 +0000 (0:00:00.249) 0:00:36.223 ****
2026-02-04 00:44:41.038792 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_3f45c181-f890-4932-9a09-2e0bc4fa8f14)
2026-02-04 00:44:41.038805 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_3f45c181-f890-4932-9a09-2e0bc4fa8f14)
2026-02-04 00:44:41.038812 | orchestrator |
2026-02-04 00:44:41.038819 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-04 00:44:41.038828 | orchestrator | Wednesday 04 February 2026 00:44:34 +0000 (0:00:00.962) 0:00:37.186 ****
2026-02-04 00:44:41.038835 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_7fd0bd10-abd8-4e0c-8290-4705cc531d08)
2026-02-04 00:44:41.038842 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_7fd0bd10-abd8-4e0c-8290-4705cc531d08)
2026-02-04 00:44:41.038849 | orchestrator |
2026-02-04 00:44:41.038856 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-04 00:44:41.038863 | orchestrator | Wednesday 04 February 2026 00:44:35 +0000 (0:00:00.509) 0:00:37.695 ****
2026-02-04 00:44:41.038870 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_194ead4c-37cf-4237-b5c2-bf752e6bc508)
2026-02-04 00:44:41.038878 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_194ead4c-37cf-4237-b5c2-bf752e6bc508)
2026-02-04 00:44:41.038885 | orchestrator |
2026-02-04 00:44:41.038893 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-04 00:44:41.038900 | orchestrator | Wednesday 04 February 2026 00:44:35 +0000 (0:00:00.515) 0:00:38.211 ****
2026-02-04 00:44:41.038907 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_ba01a385-e2e8-43d7-8237-fc6e15a9de89)
2026-02-04 00:44:41.038914 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_ba01a385-e2e8-43d7-8237-fc6e15a9de89)
2026-02-04 00:44:41.038922 | orchestrator |
2026-02-04 00:44:41.038930 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-04 00:44:41.038937 | orchestrator | Wednesday 04 February 2026 00:44:36 +0000 (0:00:00.508) 0:00:38.719 ****
2026-02-04 00:44:41.038945 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-02-04 00:44:41.038952 | orchestrator |
2026-02-04 00:44:41.038960 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-04 00:44:41.038986 | orchestrator | Wednesday 04 February 2026 00:44:36 +0000 (0:00:00.353) 0:00:39.073 ****
2026-02-04 00:44:41.038995 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2026-02-04 00:44:41.039003 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2026-02-04 00:44:41.039012 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2026-02-04 00:44:41.039018 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2026-02-04 00:44:41.039031 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2026-02-04 00:44:41.039036 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2026-02-04 00:44:41.039041 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2026-02-04 00:44:41.039046 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2026-02-04 00:44:41.039050 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2026-02-04 00:44:41.039055 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2026-02-04 00:44:41.039060 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2026-02-04 00:44:41.039065 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2026-02-04 00:44:41.039070 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2026-02-04 00:44:41.039075 | orchestrator |
2026-02-04 00:44:41.039080 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-04 00:44:41.039084 | orchestrator | Wednesday 04 February 2026 00:44:36 +0000 (0:00:00.453) 0:00:39.527 ****
2026-02-04 00:44:41.039089 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:44:41.039094 | orchestrator |
2026-02-04 00:44:41.039099 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-04 00:44:41.039104 | orchestrator | Wednesday 04 February 2026 00:44:37 +0000 (0:00:00.204) 0:00:39.731 ****
2026-02-04 00:44:41.039109 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:44:41.039114 | orchestrator |
2026-02-04 00:44:41.039118 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-04 00:44:41.039123 | orchestrator | Wednesday 04 February 2026 00:44:37 +0000 (0:00:00.229) 0:00:39.960 ****
2026-02-04 00:44:41.039128 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:44:41.039133 | orchestrator |
2026-02-04 00:44:41.039138 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-04 00:44:41.039148 | orchestrator | Wednesday 04 February 2026 00:44:37 +0000 (0:00:00.209) 0:00:40.170 ****
2026-02-04 00:44:41.039153 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:44:41.039158 | orchestrator |
2026-02-04 00:44:41.039163 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-04 00:44:41.039167 | orchestrator | Wednesday 04 February 2026 00:44:37 +0000 (0:00:00.218) 0:00:40.388 ****
2026-02-04 00:44:41.039172 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:44:41.039177 | orchestrator |
2026-02-04 00:44:41.039182 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-04 00:44:41.039187 | orchestrator | Wednesday 04 February 2026 00:44:38 +0000 (0:00:00.208) 0:00:40.597 ****
2026-02-04 00:44:41.039192 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:44:41.039197 | orchestrator |
2026-02-04 00:44:41.039202 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-04 00:44:41.039209 | orchestrator | Wednesday 04 February 2026 00:44:38 +0000 (0:00:00.743) 0:00:41.341 ****
2026-02-04 00:44:41.039216 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:44:41.039223 | orchestrator |
2026-02-04 00:44:41.039233 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-04 00:44:41.039242 | orchestrator | Wednesday 04 February 2026 00:44:39 +0000 (0:00:00.243) 0:00:41.584 ****
2026-02-04 00:44:41.039251 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:44:41.039258 | orchestrator |
2026-02-04 00:44:41.039265 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-04 00:44:41.039273 | orchestrator | Wednesday 04 February 2026 00:44:39 +0000 (0:00:00.234) 0:00:41.819 ****
2026-02-04 00:44:41.039280 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2026-02-04 00:44:41.039293 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2026-02-04 00:44:41.039301 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2026-02-04 00:44:41.039309 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2026-02-04 00:44:41.039316 | orchestrator |
2026-02-04 00:44:41.039323 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-04 00:44:41.039331 | orchestrator | Wednesday 04 February 2026 00:44:40 +0000 (0:00:00.803) 0:00:42.623 ****
2026-02-04 00:44:41.039338 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:44:41.039345 | orchestrator |
2026-02-04 00:44:41.039353 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-04 00:44:41.039361 | orchestrator | Wednesday 04 February 2026 00:44:40 +0000 (0:00:00.260) 0:00:42.884 ****
2026-02-04 00:44:41.039368 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:44:41.039376 | orchestrator |
2026-02-04 00:44:41.039382 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-04 00:44:41.039387 | orchestrator | Wednesday 04 February 2026 00:44:40 +0000 (0:00:00.220) 0:00:43.104 ****
2026-02-04 00:44:41.039391 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:44:41.039396 | orchestrator |
2026-02-04 00:44:41.039400 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-04 00:44:41.039405 | orchestrator | Wednesday 04 February 2026 00:44:40 +0000 (0:00:00.251) 0:00:43.356 ****
2026-02-04 00:44:41.039409 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:44:41.039414 | orchestrator |
2026-02-04 00:44:41.039425 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-02-04 00:44:46.347445 | orchestrator | Wednesday 04 February 2026 00:44:41 +0000 (0:00:00.249) 0:00:43.605 ****
2026-02-04 00:44:46.347655 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None})
2026-02-04 00:44:46.347707 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None})
2026-02-04 00:44:46.347721 | orchestrator |
2026-02-04 00:44:46.347734 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-02-04 00:44:46.347745 | orchestrator | Wednesday 04 February 2026 00:44:41 +0000 (0:00:00.200) 0:00:43.806 ****
2026-02-04 00:44:46.347757 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:44:46.347770 | orchestrator |
2026-02-04 00:44:46.347781 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-02-04 00:44:46.347792 | orchestrator | Wednesday 04 February 2026 00:44:41 +0000 (0:00:00.197) 0:00:44.004 ****
2026-02-04 00:44:46.347803 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:44:46.347815 | orchestrator |
2026-02-04 00:44:46.347826 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-02-04 00:44:46.347837 | orchestrator | Wednesday 04 February 2026 00:44:41 +0000 (0:00:00.141) 0:00:44.146 ****
2026-02-04 00:44:46.347848 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:44:46.347859 | orchestrator |
2026-02-04 00:44:46.347871 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-02-04 00:44:46.347882 | orchestrator | Wednesday 04 February 2026 00:44:42 +0000 (0:00:00.463) 0:00:44.609 ****
2026-02-04 00:44:46.347894 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:44:46.347906 | orchestrator |
2026-02-04 00:44:46.347917 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-02-04 00:44:46.347928 | orchestrator | Wednesday 04 February 2026 00:44:42 +0000 (0:00:00.174) 0:00:44.784 ****
2026-02-04 00:44:46.347939 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'e3daecb5-9fd0-5834-b191-078d341d10dc'}})
2026-02-04 00:44:46.347951 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '607d890d-3e41-57a1-9874-83b389fa50fb'}})
2026-02-04 00:44:46.347962 | orchestrator |
2026-02-04 00:44:46.347973 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-02-04 00:44:46.347984 | orchestrator | Wednesday 04 February 2026 00:44:42 +0000 (0:00:00.163) 0:00:44.947 ****
2026-02-04 00:44:46.347996 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'e3daecb5-9fd0-5834-b191-078d341d10dc'}})
2026-02-04 00:44:46.348041 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '607d890d-3e41-57a1-9874-83b389fa50fb'}})
2026-02-04 00:44:46.348053 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:44:46.348065 | orchestrator |
2026-02-04 00:44:46.348076 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-02-04 00:44:46.348086 | orchestrator | Wednesday 04 February 2026 00:44:42 +0000 (0:00:00.180) 0:00:45.127 ****
2026-02-04 00:44:46.348097 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'e3daecb5-9fd0-5834-b191-078d341d10dc'}})
2026-02-04 00:44:46.348108 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '607d890d-3e41-57a1-9874-83b389fa50fb'}})
2026-02-04 00:44:46.348119 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:44:46.348130 | orchestrator |
2026-02-04 00:44:46.348141 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-02-04 00:44:46.348157 | orchestrator | Wednesday 04 February 2026 00:44:42 +0000 (0:00:00.174) 0:00:45.301 ****
2026-02-04 00:44:46.348177 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'e3daecb5-9fd0-5834-b191-078d341d10dc'}})
2026-02-04 00:44:46.348196 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '607d890d-3e41-57a1-9874-83b389fa50fb'}})
2026-02-04 00:44:46.348211 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:44:46.348222 | orchestrator |
2026-02-04 00:44:46.348233 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2026-02-04 00:44:46.348244 | orchestrator | Wednesday 04 February 2026 00:44:42 +0000 (0:00:00.176) 0:00:45.477 ****
2026-02-04 00:44:46.348255 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:44:46.348266 | orchestrator |
2026-02-04 00:44:46.348277 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-02-04 00:44:46.348288 | orchestrator | Wednesday 04 February 2026 00:44:43 +0000 (0:00:00.173) 0:00:45.651 ****
2026-02-04 00:44:46.348299 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:44:46.348310 | orchestrator |
2026-02-04 00:44:46.348321 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-02-04 00:44:46.348331 | orchestrator | Wednesday 04 February 2026 00:44:43 +0000 (0:00:00.185) 0:00:45.836 ****
2026-02-04 00:44:46.348342 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:44:46.348353 | orchestrator |
2026-02-04 00:44:46.348364 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-02-04 00:44:46.348375 | orchestrator | Wednesday 04 February 2026 00:44:43 +0000 (0:00:00.164) 0:00:46.001 ****
2026-02-04 00:44:46.348386 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:44:46.348397 | orchestrator |
2026-02-04 00:44:46.348407 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-02-04 00:44:46.348418 | orchestrator | Wednesday 04 February 2026 00:44:43 +0000 (0:00:00.166) 0:00:46.167 ****
2026-02-04 00:44:46.348433 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:44:46.348452 | orchestrator |
2026-02-04 00:44:46.348469 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-02-04 00:44:46.348486 | orchestrator | Wednesday 04 February 2026 00:44:43 +0000 (0:00:00.163) 0:00:46.330 ****
2026-02-04 00:44:46.348503 | orchestrator | ok: [testbed-node-5] => {
2026-02-04 00:44:46.348520 | orchestrator |     "ceph_osd_devices": {
2026-02-04 00:44:46.348538 | orchestrator |         "sdb": {
2026-02-04 00:44:46.348621 | orchestrator |             "osd_lvm_uuid": "e3daecb5-9fd0-5834-b191-078d341d10dc"
2026-02-04 00:44:46.348644 | orchestrator |         },
2026-02-04 00:44:46.348662 | orchestrator |         "sdc": {
2026-02-04 00:44:46.348705 | orchestrator |             "osd_lvm_uuid": "607d890d-3e41-57a1-9874-83b389fa50fb"
2026-02-04 00:44:46.348727 | orchestrator |         }
2026-02-04 00:44:46.348748 | orchestrator |     }
2026-02-04 00:44:46.348769 | orchestrator | }
2026-02-04 00:44:46.348790 | orchestrator |
2026-02-04 00:44:46.348830 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-02-04 00:44:46.348852 | orchestrator | Wednesday 04 February 2026 00:44:43 +0000 (0:00:00.194) 0:00:46.525 ****
2026-02-04 00:44:46.348874 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:44:46.348891 | orchestrator |
2026-02-04 00:44:46.348939 | orchestrator | TASK [Print DB devices] ********************************************************
2026-02-04 00:44:46.348964 | orchestrator | Wednesday 04 February 2026 00:44:44 +0000 (0:00:00.432) 0:00:46.957 ****
2026-02-04 00:44:46.348988 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:44:46.349008 | orchestrator |
2026-02-04 00:44:46.349031 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-02-04 00:44:46.349055 | orchestrator | Wednesday 04 February 2026 00:44:44 +0000 (0:00:00.199) 0:00:47.156 ****
2026-02-04 00:44:46.349078 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:44:46.349099 | orchestrator |
2026-02-04 00:44:46.349120 | orchestrator | TASK [Print configuration data] ************************************************
2026-02-04 00:44:46.349141 | orchestrator | Wednesday 04 February 2026 00:44:44 +0000 (0:00:00.213) 0:00:47.370 ****
2026-02-04 00:44:46.349163 | orchestrator | changed: [testbed-node-5] => {
2026-02-04 00:44:46.349185 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-02-04 00:44:46.349207 | orchestrator |         "ceph_osd_devices": {
2026-02-04 00:44:46.349229 | orchestrator |             "sdb": {
2026-02-04 00:44:46.349248 | orchestrator |                 "osd_lvm_uuid": "e3daecb5-9fd0-5834-b191-078d341d10dc"
2026-02-04 00:44:46.349268 | orchestrator |             },
2026-02-04 00:44:46.349287 | orchestrator |             "sdc": {
2026-02-04 00:44:46.349321 | orchestrator |                 "osd_lvm_uuid": "607d890d-3e41-57a1-9874-83b389fa50fb"
2026-02-04 00:44:46.349342 | orchestrator |             }
2026-02-04 00:44:46.349360 | orchestrator |         },
2026-02-04 00:44:46.349377 | orchestrator |         "lvm_volumes": [
2026-02-04 00:44:46.349395 | orchestrator |             {
2026-02-04 00:44:46.349414 | orchestrator |                 "data": "osd-block-e3daecb5-9fd0-5834-b191-078d341d10dc",
2026-02-04 00:44:46.349432 | orchestrator |                 "data_vg": "ceph-e3daecb5-9fd0-5834-b191-078d341d10dc"
2026-02-04 00:44:46.349450 | orchestrator |             },
2026-02-04 00:44:46.349469 | orchestrator |             {
2026-02-04 00:44:46.349480 | orchestrator |                 "data": "osd-block-607d890d-3e41-57a1-9874-83b389fa50fb",
2026-02-04 00:44:46.349491 | orchestrator |                 "data_vg": "ceph-607d890d-3e41-57a1-9874-83b389fa50fb"
2026-02-04 00:44:46.349502 | orchestrator |             }
2026-02-04 00:44:46.349514 | orchestrator |         ]
2026-02-04 00:44:46.349524 | orchestrator |     }
2026-02-04 00:44:46.349536 | orchestrator | }
2026-02-04 00:44:46.349578 | orchestrator |
2026-02-04 00:44:46.349590 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-02-04 00:44:46.349601 | orchestrator | Wednesday 04 February 2026 00:44:45 +0000 (0:00:00.303) 0:00:47.673 ****
2026-02-04 00:44:46.349612 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-02-04 00:44:46.349624 | orchestrator |
2026-02-04 00:44:46.349635 | orchestrator | PLAY RECAP *********************************************************************
2026-02-04 00:44:46.349647 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-02-04 00:44:46.349659 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-02-04 00:44:46.349671 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-02-04 00:44:46.349684 | orchestrator |
2026-02-04 00:44:46.349701 | orchestrator |
2026-02-04 00:44:46.349719 | orchestrator |
2026-02-04 00:44:46.349737 | orchestrator | TASKS RECAP ********************************************************************
2026-02-04 00:44:46.349755 | orchestrator | Wednesday 04 February 2026 00:44:46 +0000 (0:00:01.205) 0:00:48.879 ****
2026-02-04 00:44:46.349783 | orchestrator | ===============================================================================
2026-02-04 00:44:46.349794 | orchestrator | Write configuration file ------------------------------------------------ 4.41s
2026-02-04 00:44:46.349804 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 1.49s
2026-02-04 00:44:46.349815 | orchestrator | Add known links to the list of available block devices ------------------ 1.46s
2026-02-04 00:44:46.349826 | orchestrator | Add known partitions to the list of available block devices ------------- 1.33s
2026-02-04 00:44:46.349837 | orchestrator | Add known partitions to the list of available block devices ------------- 1.32s
2026-02-04 00:44:46.349848 | orchestrator | Add known partitions to the list of available block devices ------------- 1.14s
2026-02-04 00:44:46.349859 | orchestrator | Print configuration data ------------------------------------------------ 1.07s
2026-02-04 00:44:46.349869 | orchestrator | Add known links to the list of available block devices ------------------ 0.96s
2026-02-04 00:44:46.349880 | orchestrator | Add known links to the list of available block devices ------------------ 0.95s
2026-02-04 00:44:46.349890 | orchestrator | Add known links to the list of available block devices ------------------ 0.81s
2026-02-04 00:44:46.349902 | orchestrator | Add known partitions to the list of available block devices ------------- 0.80s
2026-02-04 00:44:46.349920 | orchestrator | Set DB devices config data ---------------------------------------------- 0.75s
2026-02-04 00:44:46.349938 | orchestrator | Generate shared DB/WAL VG names ----------------------------------------- 0.74s
2026-02-04 00:44:46.349976 | orchestrator | Add known partitions to the list of available block devices ------------- 0.74s
2026-02-04 00:44:46.769141 | orchestrator | Get initial list of available block devices ----------------------------- 0.74s
2026-02-04 00:44:46.769262 | orchestrator | Print WAL devices ------------------------------------------------------- 0.73s
2026-02-04 00:44:46.769280 | orchestrator | Add known partitions to the list of available block devices ------------- 0.72s
2026-02-04 00:44:46.769291 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.72s
2026-02-04 00:44:46.769303 | orchestrator | Add known links to the list of available block devices ------------------ 0.69s
2026-02-04 00:44:46.769314 | orchestrator | Add known links to the list of available block devices ------------------ 0.68s
2026-02-04 00:45:09.982593 | orchestrator | 2026-02-04 00:45:09 | INFO  | Task 3ddf0c69-ad97-4695-8c70-783ee266d89d (sync inventory) is running in background. Output coming soon.
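The "Print configuration data" output earlier in the log shows how each entry in `ceph_osd_devices` is expanded into an `lvm_volumes` item: the device's `osd_lvm_uuid` becomes both the logical volume name (`osd-block-<uuid>`) and the volume group name (`ceph-<uuid>`). A minimal sketch of that expansion in plain Python (the playbook does this with Jinja2 templating; `build_lvm_volumes` is a hypothetical helper name):

```python
# Sketch of the ceph_osd_devices -> lvm_volumes expansion visible in the
# "Print configuration data" task output above. Illustration only; the
# actual transformation lives in the OSISM Ansible playbooks.
def build_lvm_volumes(ceph_osd_devices):
    return [
        {
            "data": f"osd-block-{cfg['osd_lvm_uuid']}",
            "data_vg": f"ceph-{cfg['osd_lvm_uuid']}",
        }
        for cfg in ceph_osd_devices.values()
    ]

# Values taken from the testbed-node-5 debug output in the log.
devices = {
    "sdb": {"osd_lvm_uuid": "e3daecb5-9fd0-5834-b191-078d341d10dc"},
    "sdc": {"osd_lvm_uuid": "607d890d-3e41-57a1-9874-83b389fa50fb"},
}
print(build_lvm_volumes(devices)[0]["data_vg"])
# ceph-e3daecb5-9fd0-5834-b191-078d341d10dc
```

Because each UUID is reused for both the LV and VG names, the later `ceph-create-lvm-devices` play can locate each OSD's volume group without any extra lookup table.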
2026-02-04 00:45:43.915286 | orchestrator | 2026-02-04 00:45:11 | INFO  | Starting group_vars file reorganization
2026-02-04 00:45:43.915396 | orchestrator | 2026-02-04 00:45:11 | INFO  | Moved 0 file(s) to their respective directories
2026-02-04 00:45:43.915412 | orchestrator | 2026-02-04 00:45:11 | INFO  | Group_vars file reorganization completed
2026-02-04 00:45:43.915424 | orchestrator | 2026-02-04 00:45:15 | INFO  | Starting variable preparation from inventory
2026-02-04 00:45:43.915436 | orchestrator | 2026-02-04 00:45:18 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2026-02-04 00:45:43.915448 | orchestrator | 2026-02-04 00:45:18 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2026-02-04 00:45:43.915459 | orchestrator | 2026-02-04 00:45:18 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2026-02-04 00:45:43.915470 | orchestrator | 2026-02-04 00:45:18 | INFO  | 3 file(s) written, 6 host(s) processed
2026-02-04 00:45:43.915482 | orchestrator | 2026-02-04 00:45:18 | INFO  | Variable preparation completed
2026-02-04 00:45:43.915493 | orchestrator | 2026-02-04 00:45:20 | INFO  | Starting inventory overwrite handling
2026-02-04 00:45:43.915504 | orchestrator | 2026-02-04 00:45:20 | INFO  | Handling group overwrites in 99-overwrite
2026-02-04 00:45:43.915515 | orchestrator | 2026-02-04 00:45:20 | INFO  | Removing group frr:children from 60-generic
2026-02-04 00:45:43.915572 | orchestrator | 2026-02-04 00:45:20 | INFO  | Removing group netbird:children from 50-infrastructure
2026-02-04 00:45:43.915585 | orchestrator | 2026-02-04 00:45:20 | INFO  | Removing group ceph-mds from 50-ceph
2026-02-04 00:45:43.915596 | orchestrator | 2026-02-04 00:45:20 | INFO  | Removing group ceph-rgw from 50-ceph
2026-02-04 00:45:43.915607 | orchestrator | 2026-02-04 00:45:20 | INFO  | Handling group overwrites in 20-roles
2026-02-04 00:45:43.915618 | orchestrator | 2026-02-04 00:45:20 | INFO  | Removing group k3s_node from 50-infrastructure
2026-02-04 00:45:43.915629 | orchestrator | 2026-02-04 00:45:20 | INFO  | Removed 5 group(s) in total
2026-02-04 00:45:43.915640 | orchestrator | 2026-02-04 00:45:20 | INFO  | Inventory overwrite handling completed
2026-02-04 00:45:43.915651 | orchestrator | 2026-02-04 00:45:21 | INFO  | Starting merge of inventory files
2026-02-04 00:45:43.915662 | orchestrator | 2026-02-04 00:45:21 | INFO  | Inventory files merged successfully
2026-02-04 00:45:43.915674 | orchestrator | 2026-02-04 00:45:27 | INFO  | Generating ClusterShell configuration from Ansible inventory
2026-02-04 00:45:43.915685 | orchestrator | 2026-02-04 00:45:42 | INFO  | Successfully wrote ClusterShell configuration
2026-02-04 00:45:43.915697 | orchestrator | [master 2802844] 2026-02-04-00-45
2026-02-04 00:45:43.915711 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-)
2026-02-04 00:45:47.469463 | orchestrator | 2026-02-04 00:45:47 | INFO  | Prepare task for execution of ceph-create-lvm-devices.
2026-02-04 00:45:47.550267 | orchestrator | 2026-02-04 00:45:47 | INFO  | Task 8bf8b2ef-7de3-41d7-b1b9-36dd2f747391 (ceph-create-lvm-devices) was prepared for execution.
2026-02-04 00:45:47.550349 | orchestrator | 2026-02-04 00:45:47 | INFO  | It takes a moment until task 8bf8b2ef-7de3-41d7-b1b9-36dd2f747391 (ceph-create-lvm-devices) has been started and output is visible here.
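The "inventory overwrite handling" messages above show groups defined in a higher-priority layer (e.g. 99-overwrite) being removed from lower-priority inventory files (60-generic, 50-infrastructure, 50-ceph). A minimal sketch of that idea, assuming INI-style inventory sections; the function name and format are our illustration, not OSISM's actual implementation:

```python
import re

def remove_groups(inventory_text, groups):
    """Drop the named [group] / [group:children] sections from an
    INI-style inventory text; a simplified sketch of the overwrite step."""
    out, skipping = [], False
    for line in inventory_text.splitlines():
        m = re.match(r"\[([^\]]+)\]", line)
        if m:
            # Start or stop skipping whenever a new section header appears.
            skipping = m.group(1) in groups
        if not skipping:
            out.append(line)
    return "\n".join(out)

# Hypothetical lower-priority layer; the overwrite layer redefines frr:children.
base = "[frr:children]\nnode-1\n[generic]\nnode-2"
print(remove_groups(base, {"frr:children"}))
```

The real tooling additionally logs each removal and counts the removed groups, as seen above.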
2026-02-04 00:46:00.932056 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-02-04 00:46:00.932140 | orchestrator | 2.16.14
2026-02-04 00:46:00.932151 | orchestrator |
2026-02-04 00:46:00.932159 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-02-04 00:46:00.932166 | orchestrator |
2026-02-04 00:46:00.932171 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-02-04 00:46:00.932176 | orchestrator | Wednesday 04 February 2026 00:45:52 +0000 (0:00:00.309) 0:00:00.309 ****
2026-02-04 00:46:00.932180 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-02-04 00:46:00.932185 | orchestrator |
2026-02-04 00:46:00.932189 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-02-04 00:46:00.932193 | orchestrator | Wednesday 04 February 2026 00:45:52 +0000 (0:00:00.269) 0:00:00.578 ****
2026-02-04 00:46:00.932198 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:46:00.932205 | orchestrator |
2026-02-04 00:46:00.932211 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-04 00:46:00.932218 | orchestrator | Wednesday 04 February 2026 00:45:53 +0000 (0:00:00.244) 0:00:00.823 ****
2026-02-04 00:46:00.932224 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-02-04 00:46:00.932241 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-02-04 00:46:00.932247 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-02-04 00:46:00.932253 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-02-04 00:46:00.932260 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-02-04 00:46:00.932267 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-02-04 00:46:00.932273 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-02-04 00:46:00.932294 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-02-04 00:46:00.932298 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-02-04 00:46:00.932302 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-02-04 00:46:00.932308 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-02-04 00:46:00.932314 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-02-04 00:46:00.932332 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-02-04 00:46:00.932339 | orchestrator |
2026-02-04 00:46:00.932346 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-04 00:46:00.932352 | orchestrator | Wednesday 04 February 2026 00:45:53 +0000 (0:00:00.605) 0:00:01.428 ****
2026-02-04 00:46:00.932358 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:46:00.932362 | orchestrator |
2026-02-04 00:46:00.932366 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-04 00:46:00.932369 | orchestrator | Wednesday 04 February 2026 00:45:53 +0000 (0:00:00.248) 0:00:01.677 ****
2026-02-04 00:46:00.932373 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:46:00.932377 | orchestrator |
2026-02-04 00:46:00.932381 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-04 00:46:00.932385 | orchestrator | Wednesday 04 February 2026 00:45:54 +0000 (0:00:00.215) 0:00:01.892 ****
2026-02-04 00:46:00.932389 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:46:00.932392 | orchestrator |
2026-02-04 00:46:00.932396 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-04 00:46:00.932400 | orchestrator | Wednesday 04 February 2026 00:45:54 +0000 (0:00:00.277) 0:00:02.170 ****
2026-02-04 00:46:00.932404 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:46:00.932408 | orchestrator |
2026-02-04 00:46:00.932411 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-04 00:46:00.932415 | orchestrator | Wednesday 04 February 2026 00:45:54 +0000 (0:00:00.233) 0:00:02.403 ****
2026-02-04 00:46:00.932419 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:46:00.932423 | orchestrator |
2026-02-04 00:46:00.932426 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-04 00:46:00.932430 | orchestrator | Wednesday 04 February 2026 00:45:54 +0000 (0:00:00.234) 0:00:02.638 ****
2026-02-04 00:46:00.932434 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:46:00.932438 | orchestrator |
2026-02-04 00:46:00.932441 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-04 00:46:00.932445 | orchestrator | Wednesday 04 February 2026 00:45:55 +0000 (0:00:00.198) 0:00:02.837 ****
2026-02-04 00:46:00.932449 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:46:00.932453 | orchestrator |
2026-02-04 00:46:00.932457 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-04 00:46:00.932460 | orchestrator | Wednesday 04 February 2026 00:45:55 +0000 (0:00:00.205) 0:00:03.042 ****
2026-02-04 00:46:00.932464 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:46:00.932468 | orchestrator |
2026-02-04 00:46:00.932472 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-04 00:46:00.932476 | orchestrator | Wednesday 04 February 2026 00:45:55 +0000 (0:00:00.212) 0:00:03.255 ****
2026-02-04 00:46:00.932480 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_80bce2bb-e18d-4255-9d30-172ea54b11f6)
2026-02-04 00:46:00.932485 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_80bce2bb-e18d-4255-9d30-172ea54b11f6)
2026-02-04 00:46:00.932489 | orchestrator |
2026-02-04 00:46:00.932493 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-04 00:46:00.932509 | orchestrator | Wednesday 04 February 2026 00:45:56 +0000 (0:00:00.653) 0:00:03.909 ****
2026-02-04 00:46:00.932519 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_03b06afa-7fea-4d7e-bf2e-7215727f5f52)
2026-02-04 00:46:00.932523 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_03b06afa-7fea-4d7e-bf2e-7215727f5f52)
2026-02-04 00:46:00.932526 | orchestrator |
2026-02-04 00:46:00.932572 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-04 00:46:00.932577 | orchestrator | Wednesday 04 February 2026 00:45:56 +0000 (0:00:00.686) 0:00:04.595 ****
2026-02-04 00:46:00.932580 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_54fed4a3-dd06-43ea-9731-a81abbed62bd)
2026-02-04 00:46:00.932584 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_54fed4a3-dd06-43ea-9731-a81abbed62bd)
2026-02-04 00:46:00.932588 | orchestrator |
2026-02-04 00:46:00.932592 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-04 00:46:00.932595 | orchestrator | Wednesday 04 February 2026 00:45:57 +0000 (0:00:00.733) 0:00:05.329 ****
2026-02-04 00:46:00.932599 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_fd9a253f-e742-4747-9193-aa6fcde93089)
2026-02-04 00:46:00.932603 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_fd9a253f-e742-4747-9193-aa6fcde93089)
2026-02-04 00:46:00.932609 | orchestrator |
2026-02-04 00:46:00.932613 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-04 00:46:00.932618 | orchestrator | Wednesday 04 February 2026 00:45:58 +0000 (0:00:00.966) 0:00:06.295 ****
2026-02-04 00:46:00.932622 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-02-04 00:46:00.932634 | orchestrator |
2026-02-04 00:46:00.932638 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-04 00:46:00.932643 | orchestrator | Wednesday 04 February 2026 00:45:58 +0000 (0:00:00.350) 0:00:06.645 ****
2026-02-04 00:46:00.932648 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2026-02-04 00:46:00.932652 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2026-02-04 00:46:00.932657 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2026-02-04 00:46:00.932662 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2026-02-04 00:46:00.932666 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2026-02-04 00:46:00.932671 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2026-02-04 00:46:00.932676 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2026-02-04 00:46:00.932681 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2026-02-04 00:46:00.932686 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2026-02-04 00:46:00.932692 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2026-02-04 00:46:00.932698 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2026-02-04 00:46:00.932705 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2026-02-04 00:46:00.932717 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2026-02-04 00:46:00.932725 | orchestrator |
2026-02-04 00:46:00.932732 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-04 00:46:00.932738 | orchestrator | Wednesday 04 February 2026 00:45:59 +0000 (0:00:00.434) 0:00:07.080 ****
2026-02-04 00:46:00.932745 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:46:00.932753 | orchestrator |
2026-02-04 00:46:00.932759 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-04 00:46:00.932766 | orchestrator | Wednesday 04 February 2026 00:45:59 +0000 (0:00:00.244) 0:00:07.324 ****
2026-02-04 00:46:00.932777 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:46:00.932785 | orchestrator |
2026-02-04 00:46:00.932791 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-04 00:46:00.932796 | orchestrator | Wednesday 04 February 2026 00:45:59 +0000 (0:00:00.260) 0:00:07.585 ****
2026-02-04 00:46:00.932800 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:46:00.932805 | orchestrator |
2026-02-04 00:46:00.932809 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-04 00:46:00.932813 | orchestrator | Wednesday 04 February 2026 00:46:00 +0000 (0:00:00.222) 0:00:07.808 ****
2026-02-04 00:46:00.932817 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:46:00.932821 | orchestrator |
2026-02-04 00:46:00.932825 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-04 00:46:00.932829 | orchestrator | Wednesday 04 February 2026 00:46:00 +0000 (0:00:00.213) 0:00:08.021 ****
2026-02-04 00:46:00.932832 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:46:00.932836 | orchestrator |
2026-02-04 00:46:00.932840 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-04 00:46:00.932850 | orchestrator | Wednesday 04 February 2026 00:46:00 +0000 (0:00:00.219) 0:00:08.241 ****
2026-02-04 00:46:00.932854 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:46:00.932858 | orchestrator |
2026-02-04 00:46:00.932862 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-04 00:46:00.932866 | orchestrator | Wednesday 04 February 2026 00:46:00 +0000 (0:00:00.228) 0:00:08.469 ****
2026-02-04 00:46:00.932870 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:46:00.932874 | orchestrator |
2026-02-04 00:46:00.932882 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-04 00:46:09.296249 | orchestrator | Wednesday 04 February 2026 00:46:00 +0000 (0:00:00.207) 0:00:08.676 ****
2026-02-04 00:46:09.296374 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:46:09.296396 | orchestrator |
2026-02-04 00:46:09.296414 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-04 00:46:09.296429 | orchestrator | Wednesday 04 February 2026 00:46:01 +0000 (0:00:00.209) 0:00:08.886 ****
2026-02-04 00:46:09.296444 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2026-02-04 00:46:09.296459 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2026-02-04 00:46:09.296474 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2026-02-04 00:46:09.296489 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2026-02-04 00:46:09.296503 | orchestrator |
2026-02-04 00:46:09.296518 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-04 00:46:09.296590 | orchestrator | Wednesday 04 February 2026 00:46:02 +0000 (0:00:01.267) 0:00:10.153 ****
2026-02-04 00:46:09.296606 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:46:09.296622 | orchestrator |
2026-02-04 00:46:09.296638 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-04 00:46:09.296653 | orchestrator | Wednesday 04 February 2026 00:46:02 +0000 (0:00:00.234) 0:00:10.387 ****
2026-02-04 00:46:09.296668 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:46:09.296683 | orchestrator |
2026-02-04 00:46:09.296698 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-04 00:46:09.296713 | orchestrator | Wednesday 04 February 2026 00:46:02 +0000 (0:00:00.257) 0:00:10.645 ****
2026-02-04 00:46:09.296729 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:46:09.296743 | orchestrator |
2026-02-04 00:46:09.296758 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-04 00:46:09.296773 | orchestrator | Wednesday 04 February 2026 00:46:03 +0000 (0:00:00.223) 0:00:10.869 ****
2026-02-04 00:46:09.296788 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:46:09.296803 | orchestrator |
2026-02-04 00:46:09.296820 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-02-04 00:46:09.296836 | orchestrator | Wednesday 04 February 2026 00:46:03 +0000 (0:00:00.235) 0:00:11.105 ****
2026-02-04 00:46:09.296852 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:46:09.296894 | orchestrator |
2026-02-04 00:46:09.296910 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-02-04 00:46:09.296926 | orchestrator | Wednesday 04 February 2026 00:46:03 +0000 (0:00:00.154) 0:00:11.260 ****
2026-02-04 00:46:09.296941 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'cab1220b-9ff6-5009-b197-fa753e4036d2'}})
2026-02-04 00:46:09.296956 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '4adee4b4-d62b-5502-a742-8ac6c3138b01'}})
2026-02-04 00:46:09.296971 | orchestrator |
2026-02-04 00:46:09.296999 | orchestrator | TASK [Create block VGs] ********************************************************
2026-02-04 00:46:09.297016 | orchestrator | Wednesday 04 February 2026 00:46:03 +0000 (0:00:00.203) 0:00:11.463 ****
2026-02-04 00:46:09.297032 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-cab1220b-9ff6-5009-b197-fa753e4036d2', 'data_vg': 'ceph-cab1220b-9ff6-5009-b197-fa753e4036d2'})
2026-02-04 00:46:09.297048 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-4adee4b4-d62b-5502-a742-8ac6c3138b01', 'data_vg': 'ceph-4adee4b4-d62b-5502-a742-8ac6c3138b01'})
2026-02-04 00:46:09.297063 | orchestrator |
2026-02-04 00:46:09.297078 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-02-04 00:46:09.297093 | orchestrator | Wednesday 04 February 2026 00:46:05 +0000 (0:00:01.834) 0:00:13.297 ****
2026-02-04 00:46:09.297108 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-cab1220b-9ff6-5009-b197-fa753e4036d2', 'data_vg': 'ceph-cab1220b-9ff6-5009-b197-fa753e4036d2'})
2026-02-04 00:46:09.297125 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4adee4b4-d62b-5502-a742-8ac6c3138b01', 'data_vg': 'ceph-4adee4b4-d62b-5502-a742-8ac6c3138b01'})
2026-02-04 00:46:09.297140 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:46:09.297155 | orchestrator |
2026-02-04 00:46:09.297172 | orchestrator | TASK [Create block LVs] ********************************************************
2026-02-04 00:46:09.297187 | orchestrator | Wednesday 04 February 2026 00:46:05 +0000 (0:00:00.207) 0:00:13.504 ****
2026-02-04 00:46:09.297203 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-cab1220b-9ff6-5009-b197-fa753e4036d2', 'data_vg': 'ceph-cab1220b-9ff6-5009-b197-fa753e4036d2'})
2026-02-04 00:46:09.297218 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-4adee4b4-d62b-5502-a742-8ac6c3138b01', 'data_vg': 'ceph-4adee4b4-d62b-5502-a742-8ac6c3138b01'})
2026-02-04 00:46:09.297234 | orchestrator |
2026-02-04 00:46:09.297249 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-02-04 00:46:09.297265 | orchestrator | Wednesday 04 February 2026 00:46:07 +0000 (0:00:01.416) 0:00:14.920 ****
2026-02-04 00:46:09.297279 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-cab1220b-9ff6-5009-b197-fa753e4036d2', 'data_vg': 'ceph-cab1220b-9ff6-5009-b197-fa753e4036d2'})
2026-02-04 00:46:09.297295 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4adee4b4-d62b-5502-a742-8ac6c3138b01', 'data_vg': 'ceph-4adee4b4-d62b-5502-a742-8ac6c3138b01'})
2026-02-04 00:46:09.297311 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:46:09.297326 | orchestrator |
2026-02-04 00:46:09.297340 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-02-04 00:46:09.297356 | orchestrator | Wednesday 04 February 2026 00:46:07 +0000 (0:00:00.202) 0:00:15.123 ****
2026-02-04 00:46:09.297394 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:46:09.297405 | orchestrator |
2026-02-04 00:46:09.297415 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-02-04 00:46:09.297423 | orchestrator | Wednesday 04 February 2026 00:46:07 +0000 (0:00:00.184) 0:00:15.307 ****
2026-02-04 00:46:09.297432 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-cab1220b-9ff6-5009-b197-fa753e4036d2', 'data_vg': 'ceph-cab1220b-9ff6-5009-b197-fa753e4036d2'})
2026-02-04 00:46:09.297441 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4adee4b4-d62b-5502-a742-8ac6c3138b01', 'data_vg': 'ceph-4adee4b4-d62b-5502-a742-8ac6c3138b01'})
2026-02-04 00:46:09.297460 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:46:09.297469 | orchestrator |
2026-02-04 00:46:09.297478 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-02-04 00:46:09.297487 | orchestrator | Wednesday 04 February 2026 00:46:07 +0000 (0:00:00.398) 0:00:15.706 ****
2026-02-04 00:46:09.297495 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:46:09.297504 | orchestrator |
2026-02-04 00:46:09.297513 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-02-04 00:46:09.297521 | orchestrator | Wednesday 04 February 2026 00:46:08 +0000 (0:00:00.149) 0:00:15.855 ****
2026-02-04 00:46:09.297551 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-cab1220b-9ff6-5009-b197-fa753e4036d2', 'data_vg': 'ceph-cab1220b-9ff6-5009-b197-fa753e4036d2'})
2026-02-04 00:46:09.297561 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4adee4b4-d62b-5502-a742-8ac6c3138b01', 'data_vg': 'ceph-4adee4b4-d62b-5502-a742-8ac6c3138b01'})
2026-02-04 00:46:09.297570 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:46:09.297579 | orchestrator |
2026-02-04 00:46:09.297588 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-02-04 00:46:09.297596 | orchestrator | Wednesday 04 February 2026 00:46:08 +0000 (0:00:00.165) 0:00:16.021 ****
2026-02-04 00:46:09.297605 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:46:09.297613 | orchestrator |
2026-02-04 00:46:09.297622 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-02-04 00:46:09.297631 | orchestrator | Wednesday 04 February 2026 00:46:08 +0000 (0:00:00.135) 0:00:16.156 ****
2026-02-04 00:46:09.297639 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-cab1220b-9ff6-5009-b197-fa753e4036d2', 'data_vg': 'ceph-cab1220b-9ff6-5009-b197-fa753e4036d2'})
2026-02-04 00:46:09.297648 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4adee4b4-d62b-5502-a742-8ac6c3138b01', 'data_vg': 'ceph-4adee4b4-d62b-5502-a742-8ac6c3138b01'})
2026-02-04 00:46:09.297657 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:46:09.297666 | orchestrator |
2026-02-04 00:46:09.297674 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-02-04 00:46:09.297683 | orchestrator | Wednesday 04 February 2026 00:46:08 +0000 (0:00:00.147) 0:00:16.303 ****
2026-02-04 00:46:09.297692 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:46:09.297701 | orchestrator |
2026-02-04 00:46:09.297709 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-02-04 00:46:09.297718 | orchestrator | Wednesday 04 February 2026 00:46:08 +0000 (0:00:00.129) 0:00:16.433 ****
2026-02-04 00:46:09.297727 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-cab1220b-9ff6-5009-b197-fa753e4036d2', 'data_vg': 'ceph-cab1220b-9ff6-5009-b197-fa753e4036d2'})
2026-02-04 00:46:09.297735 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4adee4b4-d62b-5502-a742-8ac6c3138b01', 'data_vg': 'ceph-4adee4b4-d62b-5502-a742-8ac6c3138b01'})
2026-02-04 00:46:09.297744 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:46:09.297753 | orchestrator |
2026-02-04 00:46:09.297761 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-02-04 00:46:09.297770 | orchestrator | Wednesday 04 February 2026 00:46:08 +0000 (0:00:00.161) 0:00:16.594 ****
2026-02-04 00:46:09.297779 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-cab1220b-9ff6-5009-b197-fa753e4036d2', 'data_vg': 'ceph-cab1220b-9ff6-5009-b197-fa753e4036d2'})
2026-02-04 00:46:09.297788 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4adee4b4-d62b-5502-a742-8ac6c3138b01', 'data_vg': 'ceph-4adee4b4-d62b-5502-a742-8ac6c3138b01'})
2026-02-04 00:46:09.297796 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:46:09.297805 | orchestrator |
2026-02-04 00:46:09.297814 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-02-04 00:46:09.297828 | orchestrator | Wednesday 04 February 2026 00:46:09 +0000 (0:00:00.160) 0:00:16.754 ****
2026-02-04 00:46:09.297837 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-cab1220b-9ff6-5009-b197-fa753e4036d2', 'data_vg': 'ceph-cab1220b-9ff6-5009-b197-fa753e4036d2'})
2026-02-04 00:46:09.297845 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4adee4b4-d62b-5502-a742-8ac6c3138b01', 'data_vg': 'ceph-4adee4b4-d62b-5502-a742-8ac6c3138b01'})
2026-02-04 00:46:09.297854 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:46:09.297862 | orchestrator |
2026-02-04 00:46:09.297871 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-02-04 00:46:09.297880 | orchestrator | Wednesday 04 February 2026 00:46:09 +0000 (0:00:00.148) 0:00:16.903 ****
2026-02-04 00:46:09.297888 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:46:09.297897 | orchestrator |
2026-02-04 00:46:09.297906 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-02-04 00:46:09.297920 | orchestrator | Wednesday 04 February 2026 00:46:09 +0000 (0:00:00.139) 0:00:17.042 ****
2026-02-04 00:46:16.409922 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:46:16.410066 | orchestrator |
2026-02-04 00:46:16.410081 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-02-04 00:46:16.410090 | orchestrator | Wednesday 04 February 2026 00:46:09 +0000 (0:00:00.128) 0:00:17.171 ****
2026-02-04 00:46:16.410097 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:46:16.410122 | orchestrator |
2026-02-04 00:46:16.410130 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-02-04 00:46:16.410135 | orchestrator | Wednesday 04 February 2026 00:46:09 +0000 (0:00:00.152) 0:00:17.324 ****
2026-02-04 00:46:16.410152 | orchestrator | ok: [testbed-node-3] => {
2026-02-04 00:46:16.410157 | orchestrator |  "_num_osds_wanted_per_db_vg": {}
2026-02-04 00:46:16.410161 | orchestrator | }
2026-02-04 00:46:16.410166 | orchestrator |
2026-02-04 00:46:16.410183 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-02-04 00:46:16.410188 | orchestrator | Wednesday 04 February 2026 00:46:09 +0000 (0:00:00.330) 0:00:17.654 ****
2026-02-04 00:46:16.410192 | orchestrator | ok: [testbed-node-3] => {
2026-02-04 00:46:16.410198 | orchestrator |  "_num_osds_wanted_per_wal_vg": {}
2026-02-04 00:46:16.410202 | orchestrator | }
2026-02-04 00:46:16.410206 | orchestrator |
2026-02-04 00:46:16.410210 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-02-04 00:46:16.410214 | orchestrator | Wednesday 04 February 2026 00:46:10 +0000 (0:00:00.178) 0:00:17.832 ****
2026-02-04 00:46:16.410218 | orchestrator | ok: [testbed-node-3] => {
2026-02-04 00:46:16.410222 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {}
2026-02-04 00:46:16.410226 | orchestrator | }
2026-02-04 00:46:16.410230 | orchestrator |
2026-02-04 00:46:16.410234 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-02-04 00:46:16.410238 | orchestrator | Wednesday 04 February 2026 00:46:10 +0000 (0:00:00.213) 0:00:18.045 ****
2026-02-04 00:46:16.410242 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:46:16.410258 | orchestrator |
2026-02-04 00:46:16.410262 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-02-04 00:46:16.410266 | orchestrator | Wednesday 04 February 2026 00:46:11 +0000 (0:00:00.809) 0:00:18.855 ****
2026-02-04 00:46:16.410270 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:46:16.410274 | orchestrator |
2026-02-04 00:46:16.410278 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-02-04 00:46:16.410281 | orchestrator | Wednesday 04 February 2026 00:46:11 +0000 (0:00:00.609) 0:00:19.465 ****
2026-02-04 00:46:16.410285 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:46:16.410289 | orchestrator |
2026-02-04 00:46:16.410293 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-02-04 00:46:16.410297 | orchestrator | Wednesday 04 February 2026 00:46:12 +0000 (0:00:00.583) 0:00:20.048 ****
2026-02-04 00:46:16.410301 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:46:16.410304 | orchestrator |
2026-02-04 00:46:16.410327 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-02-04 00:46:16.410331 | orchestrator | Wednesday 04 February 2026 00:46:12 +0000 (0:00:00.175) 0:00:20.223 ****
2026-02-04 00:46:16.410335 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:46:16.410339 | orchestrator |
2026-02-04 00:46:16.410343 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-02-04 00:46:16.410347 | orchestrator | Wednesday 04 February 2026 00:46:12 +0000 (0:00:00.142) 0:00:20.366 ****
2026-02-04 00:46:16.410351 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:46:16.410365 | orchestrator |
2026-02-04 00:46:16.410369 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-02-04 00:46:16.410373 | orchestrator | Wednesday 04 February 2026 00:46:12 +0000 (0:00:00.126) 0:00:20.493 ****
2026-02-04 00:46:16.410377 | orchestrator | ok: [testbed-node-3] => {
2026-02-04 00:46:16.410381 | orchestrator |  "vgs_report": {
2026-02-04 00:46:16.410385 | orchestrator |  "vg": []
2026-02-04 00:46:16.410400 | orchestrator |  }
2026-02-04 00:46:16.410404 | orchestrator | }
2026-02-04 00:46:16.410408 | orchestrator |
2026-02-04 00:46:16.410412 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-02-04 00:46:16.410416 | orchestrator | Wednesday 04 February 2026 00:46:12 +0000 (0:00:00.149) 0:00:20.642 ****
2026-02-04 00:46:16.410420 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:46:16.410424 | orchestrator |
2026-02-04 00:46:16.410428 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-02-04 00:46:16.410431 | orchestrator | Wednesday 04 February 2026 00:46:13 +0000 (0:00:00.133) 0:00:20.776 ****
2026-02-04 00:46:16.410435 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:46:16.410439 | orchestrator |
2026-02-04 00:46:16.410445 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-02-04 00:46:16.410451 | orchestrator | Wednesday 04 February 2026 00:46:13 +0000 (0:00:00.138) 0:00:20.915 ****
2026-02-04 00:46:16.410457 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:46:16.410466 | orchestrator |
2026-02-04 00:46:16.410475 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-02-04 00:46:16.410481 | orchestrator | Wednesday 04 February 2026 00:46:13 +0000 (0:00:00.340) 0:00:21.256 ****
2026-02-04 00:46:16.410486 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:46:16.410492 | orchestrator |
2026-02-04 00:46:16.410514 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-02-04 00:46:16.410521 | orchestrator | Wednesday 04 February 2026 00:46:13 +0000 (0:00:00.152) 0:00:21.408 ****
2026-02-04 00:46:16.410574 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:46:16.410582 | orchestrator |
2026-02-04 00:46:16.410587 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-02-04 00:46:16.410619 | orchestrator | Wednesday 04 February 2026 00:46:13 +0000 (0:00:00.141) 0:00:21.550 ****
2026-02-04 00:46:16.410626 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:46:16.410632 | orchestrator |
2026-02-04 00:46:16.410638 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-02-04 00:46:16.410663 | orchestrator | Wednesday 04 February 2026 00:46:13 +0000 (0:00:00.137) 0:00:21.687 ****
2026-02-04 00:46:16.410670 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:46:16.410676 | orchestrator |
2026-02-04 00:46:16.410683 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-02-04 00:46:16.410689 | orchestrator | Wednesday 04 February 2026 00:46:14 +0000 (0:00:00.154) 0:00:21.842 ****
2026-02-04 00:46:16.410711 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:46:16.410719 | orchestrator |
2026-02-04 00:46:16.410725 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-02-04 00:46:16.410731 | orchestrator | Wednesday 04 February 2026 00:46:14 +0000 (0:00:00.148) 0:00:21.991 ****
2026-02-04 00:46:16.410737 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:46:16.410743 | orchestrator |
2026-02-04 00:46:16.410747 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-02-04 00:46:16.410771 | orchestrator | Wednesday 04 February 2026 00:46:14 +0000 (0:00:00.158) 0:00:22.149 ****
2026-02-04 00:46:16.410775 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:46:16.410779 | orchestrator |
2026-02-04 00:46:16.410782
| orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-02-04 00:46:16.410786 | orchestrator | Wednesday 04 February 2026 00:46:14 +0000 (0:00:00.140) 0:00:22.290 **** 2026-02-04 00:46:16.410790 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:46:16.410794 | orchestrator | 2026-02-04 00:46:16.410798 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-02-04 00:46:16.410813 | orchestrator | Wednesday 04 February 2026 00:46:14 +0000 (0:00:00.157) 0:00:22.448 **** 2026-02-04 00:46:16.410818 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:46:16.410822 | orchestrator | 2026-02-04 00:46:16.410825 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-02-04 00:46:16.410829 | orchestrator | Wednesday 04 February 2026 00:46:14 +0000 (0:00:00.142) 0:00:22.590 **** 2026-02-04 00:46:16.410833 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:46:16.410837 | orchestrator | 2026-02-04 00:46:16.410841 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-02-04 00:46:16.410845 | orchestrator | Wednesday 04 February 2026 00:46:15 +0000 (0:00:00.177) 0:00:22.767 **** 2026-02-04 00:46:16.410848 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:46:16.410852 | orchestrator | 2026-02-04 00:46:16.410856 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-02-04 00:46:16.410860 | orchestrator | Wednesday 04 February 2026 00:46:15 +0000 (0:00:00.157) 0:00:22.924 **** 2026-02-04 00:46:16.410865 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-cab1220b-9ff6-5009-b197-fa753e4036d2', 'data_vg': 'ceph-cab1220b-9ff6-5009-b197-fa753e4036d2'})  2026-02-04 00:46:16.410871 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4adee4b4-d62b-5502-a742-8ac6c3138b01', 'data_vg': 
'ceph-4adee4b4-d62b-5502-a742-8ac6c3138b01'})  2026-02-04 00:46:16.410875 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:46:16.410879 | orchestrator | 2026-02-04 00:46:16.410883 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-02-04 00:46:16.410891 | orchestrator | Wednesday 04 February 2026 00:46:15 +0000 (0:00:00.421) 0:00:23.346 **** 2026-02-04 00:46:16.410906 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-cab1220b-9ff6-5009-b197-fa753e4036d2', 'data_vg': 'ceph-cab1220b-9ff6-5009-b197-fa753e4036d2'})  2026-02-04 00:46:16.410910 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4adee4b4-d62b-5502-a742-8ac6c3138b01', 'data_vg': 'ceph-4adee4b4-d62b-5502-a742-8ac6c3138b01'})  2026-02-04 00:46:16.410914 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:46:16.410918 | orchestrator | 2026-02-04 00:46:16.410921 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-02-04 00:46:16.410925 | orchestrator | Wednesday 04 February 2026 00:46:15 +0000 (0:00:00.171) 0:00:23.518 **** 2026-02-04 00:46:16.410929 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-cab1220b-9ff6-5009-b197-fa753e4036d2', 'data_vg': 'ceph-cab1220b-9ff6-5009-b197-fa753e4036d2'})  2026-02-04 00:46:16.410933 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4adee4b4-d62b-5502-a742-8ac6c3138b01', 'data_vg': 'ceph-4adee4b4-d62b-5502-a742-8ac6c3138b01'})  2026-02-04 00:46:16.410937 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:46:16.410941 | orchestrator | 2026-02-04 00:46:16.410944 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2026-02-04 00:46:16.410948 | orchestrator | Wednesday 04 February 2026 00:46:15 +0000 (0:00:00.181) 0:00:23.700 **** 2026-02-04 00:46:16.410952 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-cab1220b-9ff6-5009-b197-fa753e4036d2', 'data_vg': 'ceph-cab1220b-9ff6-5009-b197-fa753e4036d2'})  2026-02-04 00:46:16.410956 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4adee4b4-d62b-5502-a742-8ac6c3138b01', 'data_vg': 'ceph-4adee4b4-d62b-5502-a742-8ac6c3138b01'})  2026-02-04 00:46:16.410972 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:46:16.410976 | orchestrator | 2026-02-04 00:46:16.410980 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-02-04 00:46:16.410984 | orchestrator | Wednesday 04 February 2026 00:46:16 +0000 (0:00:00.186) 0:00:23.886 **** 2026-02-04 00:46:16.410988 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-cab1220b-9ff6-5009-b197-fa753e4036d2', 'data_vg': 'ceph-cab1220b-9ff6-5009-b197-fa753e4036d2'})  2026-02-04 00:46:16.410992 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4adee4b4-d62b-5502-a742-8ac6c3138b01', 'data_vg': 'ceph-4adee4b4-d62b-5502-a742-8ac6c3138b01'})  2026-02-04 00:46:16.410996 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:46:16.410999 | orchestrator | 2026-02-04 00:46:16.411003 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-02-04 00:46:16.411009 | orchestrator | Wednesday 04 February 2026 00:46:16 +0000 (0:00:00.194) 0:00:24.081 **** 2026-02-04 00:46:16.411020 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-cab1220b-9ff6-5009-b197-fa753e4036d2', 'data_vg': 'ceph-cab1220b-9ff6-5009-b197-fa753e4036d2'})  2026-02-04 00:46:22.721071 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4adee4b4-d62b-5502-a742-8ac6c3138b01', 'data_vg': 'ceph-4adee4b4-d62b-5502-a742-8ac6c3138b01'})  2026-02-04 00:46:22.721164 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:46:22.721178 | orchestrator | 2026-02-04 00:46:22.721188 | orchestrator | TASK [Create DB LVs for 
ceph_db_wal_devices] *********************************** 2026-02-04 00:46:22.721198 | orchestrator | Wednesday 04 February 2026 00:46:16 +0000 (0:00:00.187) 0:00:24.268 **** 2026-02-04 00:46:22.721207 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-cab1220b-9ff6-5009-b197-fa753e4036d2', 'data_vg': 'ceph-cab1220b-9ff6-5009-b197-fa753e4036d2'})  2026-02-04 00:46:22.721216 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4adee4b4-d62b-5502-a742-8ac6c3138b01', 'data_vg': 'ceph-4adee4b4-d62b-5502-a742-8ac6c3138b01'})  2026-02-04 00:46:22.721224 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:46:22.721233 | orchestrator | 2026-02-04 00:46:22.721241 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-02-04 00:46:22.721249 | orchestrator | Wednesday 04 February 2026 00:46:16 +0000 (0:00:00.163) 0:00:24.432 **** 2026-02-04 00:46:22.721258 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-cab1220b-9ff6-5009-b197-fa753e4036d2', 'data_vg': 'ceph-cab1220b-9ff6-5009-b197-fa753e4036d2'})  2026-02-04 00:46:22.721266 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4adee4b4-d62b-5502-a742-8ac6c3138b01', 'data_vg': 'ceph-4adee4b4-d62b-5502-a742-8ac6c3138b01'})  2026-02-04 00:46:22.721274 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:46:22.721282 | orchestrator | 2026-02-04 00:46:22.721291 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-02-04 00:46:22.721299 | orchestrator | Wednesday 04 February 2026 00:46:16 +0000 (0:00:00.170) 0:00:24.602 **** 2026-02-04 00:46:22.721307 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:46:22.721316 | orchestrator | 2026-02-04 00:46:22.721324 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-02-04 00:46:22.721333 | orchestrator | Wednesday 04 February 2026 00:46:17 +0000 
(0:00:00.569) 0:00:25.171 **** 2026-02-04 00:46:22.721341 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:46:22.721349 | orchestrator | 2026-02-04 00:46:22.721357 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-02-04 00:46:22.721365 | orchestrator | Wednesday 04 February 2026 00:46:17 +0000 (0:00:00.569) 0:00:25.740 **** 2026-02-04 00:46:22.721373 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:46:22.721381 | orchestrator | 2026-02-04 00:46:22.721389 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-02-04 00:46:22.721398 | orchestrator | Wednesday 04 February 2026 00:46:18 +0000 (0:00:00.175) 0:00:25.916 **** 2026-02-04 00:46:22.721424 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-4adee4b4-d62b-5502-a742-8ac6c3138b01', 'vg_name': 'ceph-4adee4b4-d62b-5502-a742-8ac6c3138b01'}) 2026-02-04 00:46:22.721433 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-cab1220b-9ff6-5009-b197-fa753e4036d2', 'vg_name': 'ceph-cab1220b-9ff6-5009-b197-fa753e4036d2'}) 2026-02-04 00:46:22.721441 | orchestrator | 2026-02-04 00:46:22.721449 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-02-04 00:46:22.721457 | orchestrator | Wednesday 04 February 2026 00:46:18 +0000 (0:00:00.196) 0:00:26.113 **** 2026-02-04 00:46:22.721480 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-cab1220b-9ff6-5009-b197-fa753e4036d2', 'data_vg': 'ceph-cab1220b-9ff6-5009-b197-fa753e4036d2'})  2026-02-04 00:46:22.721488 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4adee4b4-d62b-5502-a742-8ac6c3138b01', 'data_vg': 'ceph-4adee4b4-d62b-5502-a742-8ac6c3138b01'})  2026-02-04 00:46:22.721496 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:46:22.721504 | orchestrator | 2026-02-04 00:46:22.721512 | orchestrator | TASK [Fail if DB LV defined in 
lvm_volumes is missing] ************************* 2026-02-04 00:46:22.721521 | orchestrator | Wednesday 04 February 2026 00:46:18 +0000 (0:00:00.401) 0:00:26.515 **** 2026-02-04 00:46:22.721589 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-cab1220b-9ff6-5009-b197-fa753e4036d2', 'data_vg': 'ceph-cab1220b-9ff6-5009-b197-fa753e4036d2'})  2026-02-04 00:46:22.721598 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4adee4b4-d62b-5502-a742-8ac6c3138b01', 'data_vg': 'ceph-4adee4b4-d62b-5502-a742-8ac6c3138b01'})  2026-02-04 00:46:22.721606 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:46:22.721614 | orchestrator | 2026-02-04 00:46:22.721622 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-02-04 00:46:22.721630 | orchestrator | Wednesday 04 February 2026 00:46:18 +0000 (0:00:00.234) 0:00:26.750 **** 2026-02-04 00:46:22.721638 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-cab1220b-9ff6-5009-b197-fa753e4036d2', 'data_vg': 'ceph-cab1220b-9ff6-5009-b197-fa753e4036d2'})  2026-02-04 00:46:22.721646 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4adee4b4-d62b-5502-a742-8ac6c3138b01', 'data_vg': 'ceph-4adee4b4-d62b-5502-a742-8ac6c3138b01'})  2026-02-04 00:46:22.721654 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:46:22.721662 | orchestrator | 2026-02-04 00:46:22.721670 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-02-04 00:46:22.721678 | orchestrator | Wednesday 04 February 2026 00:46:19 +0000 (0:00:00.194) 0:00:26.944 **** 2026-02-04 00:46:22.721702 | orchestrator | ok: [testbed-node-3] => { 2026-02-04 00:46:22.721711 | orchestrator |  "lvm_report": { 2026-02-04 00:46:22.721719 | orchestrator |  "lv": [ 2026-02-04 00:46:22.721727 | orchestrator |  { 2026-02-04 00:46:22.721736 | orchestrator |  "lv_name": 
"osd-block-4adee4b4-d62b-5502-a742-8ac6c3138b01", 2026-02-04 00:46:22.721745 | orchestrator |  "vg_name": "ceph-4adee4b4-d62b-5502-a742-8ac6c3138b01" 2026-02-04 00:46:22.721752 | orchestrator |  }, 2026-02-04 00:46:22.721760 | orchestrator |  { 2026-02-04 00:46:22.721769 | orchestrator |  "lv_name": "osd-block-cab1220b-9ff6-5009-b197-fa753e4036d2", 2026-02-04 00:46:22.721777 | orchestrator |  "vg_name": "ceph-cab1220b-9ff6-5009-b197-fa753e4036d2" 2026-02-04 00:46:22.721785 | orchestrator |  } 2026-02-04 00:46:22.721793 | orchestrator |  ], 2026-02-04 00:46:22.721800 | orchestrator |  "pv": [ 2026-02-04 00:46:22.721808 | orchestrator |  { 2026-02-04 00:46:22.721816 | orchestrator |  "pv_name": "/dev/sdb", 2026-02-04 00:46:22.721824 | orchestrator |  "vg_name": "ceph-cab1220b-9ff6-5009-b197-fa753e4036d2" 2026-02-04 00:46:22.721832 | orchestrator |  }, 2026-02-04 00:46:22.721840 | orchestrator |  { 2026-02-04 00:46:22.721855 | orchestrator |  "pv_name": "/dev/sdc", 2026-02-04 00:46:22.721863 | orchestrator |  "vg_name": "ceph-4adee4b4-d62b-5502-a742-8ac6c3138b01" 2026-02-04 00:46:22.721871 | orchestrator |  } 2026-02-04 00:46:22.721879 | orchestrator |  ] 2026-02-04 00:46:22.721887 | orchestrator |  } 2026-02-04 00:46:22.721902 | orchestrator | } 2026-02-04 00:46:22.721915 | orchestrator | 2026-02-04 00:46:22.721929 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-02-04 00:46:22.721941 | orchestrator | 2026-02-04 00:46:22.721952 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-02-04 00:46:22.721964 | orchestrator | Wednesday 04 February 2026 00:46:19 +0000 (0:00:00.406) 0:00:27.351 **** 2026-02-04 00:46:22.721975 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-02-04 00:46:22.721987 | orchestrator | 2026-02-04 00:46:22.721999 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-02-04 
00:46:22.722012 | orchestrator | Wednesday 04 February 2026 00:46:20 +0000 (0:00:00.442) 0:00:27.794 **** 2026-02-04 00:46:22.722083 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:46:22.722095 | orchestrator | 2026-02-04 00:46:22.722109 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 00:46:22.722122 | orchestrator | Wednesday 04 February 2026 00:46:20 +0000 (0:00:00.316) 0:00:28.110 **** 2026-02-04 00:46:22.722142 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-02-04 00:46:22.722156 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-02-04 00:46:22.722169 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-02-04 00:46:22.722183 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-02-04 00:46:22.722197 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-02-04 00:46:22.722209 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-02-04 00:46:22.722223 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-02-04 00:46:22.722236 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-02-04 00:46:22.722251 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-02-04 00:46:22.722259 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-02-04 00:46:22.722267 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-02-04 00:46:22.722275 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-02-04 00:46:22.722282 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-02-04 00:46:22.722290 | orchestrator | 2026-02-04 00:46:22.722298 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 00:46:22.722306 | orchestrator | Wednesday 04 February 2026 00:46:20 +0000 (0:00:00.463) 0:00:28.574 **** 2026-02-04 00:46:22.722314 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:46:22.722322 | orchestrator | 2026-02-04 00:46:22.722330 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 00:46:22.722338 | orchestrator | Wednesday 04 February 2026 00:46:21 +0000 (0:00:00.228) 0:00:28.802 **** 2026-02-04 00:46:22.722345 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:46:22.722353 | orchestrator | 2026-02-04 00:46:22.722361 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 00:46:22.722369 | orchestrator | Wednesday 04 February 2026 00:46:21 +0000 (0:00:00.221) 0:00:29.024 **** 2026-02-04 00:46:22.722378 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:46:22.722391 | orchestrator | 2026-02-04 00:46:22.722404 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 00:46:22.722426 | orchestrator | Wednesday 04 February 2026 00:46:21 +0000 (0:00:00.718) 0:00:29.743 **** 2026-02-04 00:46:22.722438 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:46:22.722451 | orchestrator | 2026-02-04 00:46:22.722464 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 00:46:22.722477 | orchestrator | Wednesday 04 February 2026 00:46:22 +0000 (0:00:00.200) 0:00:29.944 **** 2026-02-04 00:46:22.722489 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:46:22.722503 | orchestrator | 2026-02-04 00:46:22.722514 | orchestrator | TASK [Add known links to the 
list of available block devices] ****************** 2026-02-04 00:46:22.722548 | orchestrator | Wednesday 04 February 2026 00:46:22 +0000 (0:00:00.253) 0:00:30.198 **** 2026-02-04 00:46:22.722562 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:46:22.722577 | orchestrator | 2026-02-04 00:46:22.722602 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 00:46:35.053416 | orchestrator | Wednesday 04 February 2026 00:46:22 +0000 (0:00:00.264) 0:00:30.462 **** 2026-02-04 00:46:35.053511 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:46:35.053522 | orchestrator | 2026-02-04 00:46:35.053598 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 00:46:35.053605 | orchestrator | Wednesday 04 February 2026 00:46:22 +0000 (0:00:00.258) 0:00:30.721 **** 2026-02-04 00:46:35.053612 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:46:35.053619 | orchestrator | 2026-02-04 00:46:35.053625 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 00:46:35.053632 | orchestrator | Wednesday 04 February 2026 00:46:23 +0000 (0:00:00.220) 0:00:30.942 **** 2026-02-04 00:46:35.053639 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_a371d88b-21aa-46d0-9a00-c59fe370106e) 2026-02-04 00:46:35.053647 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_a371d88b-21aa-46d0-9a00-c59fe370106e) 2026-02-04 00:46:35.053654 | orchestrator | 2026-02-04 00:46:35.053660 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 00:46:35.053666 | orchestrator | Wednesday 04 February 2026 00:46:23 +0000 (0:00:00.477) 0:00:31.419 **** 2026-02-04 00:46:35.053673 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_27db8536-d7cf-467f-b2b6-f0129584608d) 2026-02-04 00:46:35.053679 | orchestrator | ok: 
[testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_27db8536-d7cf-467f-b2b6-f0129584608d) 2026-02-04 00:46:35.053690 | orchestrator | 2026-02-04 00:46:35.053697 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 00:46:35.053703 | orchestrator | Wednesday 04 February 2026 00:46:24 +0000 (0:00:00.551) 0:00:31.971 **** 2026-02-04 00:46:35.053710 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_d0af9621-3ff0-4b28-b816-705c9ef71a8d) 2026-02-04 00:46:35.053716 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_d0af9621-3ff0-4b28-b816-705c9ef71a8d) 2026-02-04 00:46:35.053722 | orchestrator | 2026-02-04 00:46:35.053728 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 00:46:35.053735 | orchestrator | Wednesday 04 February 2026 00:46:24 +0000 (0:00:00.480) 0:00:32.451 **** 2026-02-04 00:46:35.053755 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_092c1f4e-b194-45a0-a7eb-d90ae37efda4) 2026-02-04 00:46:35.053761 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_092c1f4e-b194-45a0-a7eb-d90ae37efda4) 2026-02-04 00:46:35.053772 | orchestrator | 2026-02-04 00:46:35.053778 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 00:46:35.053784 | orchestrator | Wednesday 04 February 2026 00:46:25 +0000 (0:00:00.686) 0:00:33.138 **** 2026-02-04 00:46:35.053789 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-02-04 00:46:35.053795 | orchestrator | 2026-02-04 00:46:35.053800 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 00:46:35.053806 | orchestrator | Wednesday 04 February 2026 00:46:26 +0000 (0:00:00.991) 0:00:34.129 **** 2026-02-04 00:46:35.053833 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => 
(item=loop0) 2026-02-04 00:46:35.053841 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-02-04 00:46:35.053847 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-02-04 00:46:35.053853 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-02-04 00:46:35.053860 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-02-04 00:46:35.053866 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-02-04 00:46:35.053872 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-02-04 00:46:35.053879 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-02-04 00:46:35.053885 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-02-04 00:46:35.053891 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-02-04 00:46:35.053897 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2026-02-04 00:46:35.053903 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-02-04 00:46:35.053910 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-02-04 00:46:35.053917 | orchestrator | 2026-02-04 00:46:35.053924 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 00:46:35.053930 | orchestrator | Wednesday 04 February 2026 00:46:27 +0000 (0:00:00.948) 0:00:35.078 **** 2026-02-04 00:46:35.053937 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:46:35.053944 | orchestrator | 2026-02-04 
00:46:35.053950 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 00:46:35.053964 | orchestrator | Wednesday 04 February 2026 00:46:27 +0000 (0:00:00.209) 0:00:35.287 **** 2026-02-04 00:46:35.053971 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:46:35.053978 | orchestrator | 2026-02-04 00:46:35.053985 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 00:46:35.053992 | orchestrator | Wednesday 04 February 2026 00:46:27 +0000 (0:00:00.209) 0:00:35.497 **** 2026-02-04 00:46:35.053999 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:46:35.054006 | orchestrator | 2026-02-04 00:46:35.054072 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 00:46:35.054081 | orchestrator | Wednesday 04 February 2026 00:46:27 +0000 (0:00:00.214) 0:00:35.711 **** 2026-02-04 00:46:35.054088 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:46:35.054095 | orchestrator | 2026-02-04 00:46:35.054102 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 00:46:35.054110 | orchestrator | Wednesday 04 February 2026 00:46:28 +0000 (0:00:00.225) 0:00:35.937 **** 2026-02-04 00:46:35.054116 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:46:35.054123 | orchestrator | 2026-02-04 00:46:35.054133 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 00:46:35.054140 | orchestrator | Wednesday 04 February 2026 00:46:28 +0000 (0:00:00.205) 0:00:36.143 **** 2026-02-04 00:46:35.054147 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:46:35.054154 | orchestrator | 2026-02-04 00:46:35.054161 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 00:46:35.054168 | orchestrator | Wednesday 04 February 2026 00:46:28 +0000 (0:00:00.232) 
0:00:36.375 **** 2026-02-04 00:46:35.054175 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:46:35.054181 | orchestrator | 2026-02-04 00:46:35.054188 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 00:46:35.054195 | orchestrator | Wednesday 04 February 2026 00:46:28 +0000 (0:00:00.214) 0:00:36.589 **** 2026-02-04 00:46:35.054207 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:46:35.054215 | orchestrator | 2026-02-04 00:46:35.054221 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 00:46:35.054227 | orchestrator | Wednesday 04 February 2026 00:46:29 +0000 (0:00:00.221) 0:00:36.811 **** 2026-02-04 00:46:35.054233 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-02-04 00:46:35.054239 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-02-04 00:46:35.054248 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-02-04 00:46:35.054257 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-02-04 00:46:35.054263 | orchestrator | 2026-02-04 00:46:35.054269 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 00:46:35.054274 | orchestrator | Wednesday 04 February 2026 00:46:30 +0000 (0:00:00.941) 0:00:37.752 **** 2026-02-04 00:46:35.054281 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:46:35.054288 | orchestrator | 2026-02-04 00:46:35.054294 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 00:46:35.054301 | orchestrator | Wednesday 04 February 2026 00:46:30 +0000 (0:00:00.201) 0:00:37.954 **** 2026-02-04 00:46:35.054308 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:46:35.054314 | orchestrator | 2026-02-04 00:46:35.054321 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 00:46:35.054342 | orchestrator | Wednesday 04 
February 2026 00:46:30 +0000 (0:00:00.741) 0:00:38.695 **** 2026-02-04 00:46:35.054350 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:46:35.054356 | orchestrator | 2026-02-04 00:46:35.054362 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 00:46:35.054369 | orchestrator | Wednesday 04 February 2026 00:46:31 +0000 (0:00:00.223) 0:00:38.919 **** 2026-02-04 00:46:35.054376 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:46:35.054382 | orchestrator | 2026-02-04 00:46:35.054389 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-02-04 00:46:35.054396 | orchestrator | Wednesday 04 February 2026 00:46:31 +0000 (0:00:00.218) 0:00:39.138 **** 2026-02-04 00:46:35.054402 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:46:35.054409 | orchestrator | 2026-02-04 00:46:35.054416 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-02-04 00:46:35.054422 | orchestrator | Wednesday 04 February 2026 00:46:31 +0000 (0:00:00.147) 0:00:39.285 **** 2026-02-04 00:46:35.054429 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '6cd3944c-50dd-590e-9699-94e09e9b1959'}}) 2026-02-04 00:46:35.054436 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '197bc0b1-bda8-5def-b850-786176b935dd'}}) 2026-02-04 00:46:35.054443 | orchestrator | 2026-02-04 00:46:35.054450 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-02-04 00:46:35.054456 | orchestrator | Wednesday 04 February 2026 00:46:31 +0000 (0:00:00.235) 0:00:39.520 **** 2026-02-04 00:46:35.054465 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-6cd3944c-50dd-590e-9699-94e09e9b1959', 'data_vg': 'ceph-6cd3944c-50dd-590e-9699-94e09e9b1959'}) 2026-02-04 00:46:35.054472 | orchestrator | changed: [testbed-node-4] => 
(item={'data': 'osd-block-197bc0b1-bda8-5def-b850-786176b935dd', 'data_vg': 'ceph-197bc0b1-bda8-5def-b850-786176b935dd'}) 2026-02-04 00:46:35.054479 | orchestrator | 2026-02-04 00:46:35.054485 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-02-04 00:46:35.054492 | orchestrator | Wednesday 04 February 2026 00:46:33 +0000 (0:00:01.860) 0:00:41.381 **** 2026-02-04 00:46:35.054498 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6cd3944c-50dd-590e-9699-94e09e9b1959', 'data_vg': 'ceph-6cd3944c-50dd-590e-9699-94e09e9b1959'})  2026-02-04 00:46:35.054506 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-197bc0b1-bda8-5def-b850-786176b935dd', 'data_vg': 'ceph-197bc0b1-bda8-5def-b850-786176b935dd'})  2026-02-04 00:46:35.054518 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:46:35.054541 | orchestrator | 2026-02-04 00:46:35.054548 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-02-04 00:46:35.054554 | orchestrator | Wednesday 04 February 2026 00:46:33 +0000 (0:00:00.184) 0:00:41.565 **** 2026-02-04 00:46:35.054560 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-6cd3944c-50dd-590e-9699-94e09e9b1959', 'data_vg': 'ceph-6cd3944c-50dd-590e-9699-94e09e9b1959'}) 2026-02-04 00:46:35.054573 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-197bc0b1-bda8-5def-b850-786176b935dd', 'data_vg': 'ceph-197bc0b1-bda8-5def-b850-786176b935dd'}) 2026-02-04 00:46:42.156125 | orchestrator | 2026-02-04 00:46:42.156243 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-02-04 00:46:42.156262 | orchestrator | Wednesday 04 February 2026 00:46:35 +0000 (0:00:01.317) 0:00:42.883 **** 2026-02-04 00:46:42.156275 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6cd3944c-50dd-590e-9699-94e09e9b1959', 'data_vg': 
'ceph-6cd3944c-50dd-590e-9699-94e09e9b1959'})  2026-02-04 00:46:42.156288 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-197bc0b1-bda8-5def-b850-786176b935dd', 'data_vg': 'ceph-197bc0b1-bda8-5def-b850-786176b935dd'})  2026-02-04 00:46:42.156300 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:46:42.156314 | orchestrator | 2026-02-04 00:46:42.156325 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-02-04 00:46:42.156337 | orchestrator | Wednesday 04 February 2026 00:46:35 +0000 (0:00:00.162) 0:00:43.045 **** 2026-02-04 00:46:42.156348 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:46:42.156359 | orchestrator | 2026-02-04 00:46:42.156370 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-02-04 00:46:42.156381 | orchestrator | Wednesday 04 February 2026 00:46:35 +0000 (0:00:00.146) 0:00:43.192 **** 2026-02-04 00:46:42.156392 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6cd3944c-50dd-590e-9699-94e09e9b1959', 'data_vg': 'ceph-6cd3944c-50dd-590e-9699-94e09e9b1959'})  2026-02-04 00:46:42.156403 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-197bc0b1-bda8-5def-b850-786176b935dd', 'data_vg': 'ceph-197bc0b1-bda8-5def-b850-786176b935dd'})  2026-02-04 00:46:42.156414 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:46:42.156425 | orchestrator | 2026-02-04 00:46:42.156436 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-02-04 00:46:42.156447 | orchestrator | Wednesday 04 February 2026 00:46:35 +0000 (0:00:00.162) 0:00:43.354 **** 2026-02-04 00:46:42.156458 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:46:42.156469 | orchestrator | 2026-02-04 00:46:42.156479 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-02-04 00:46:42.156508 | orchestrator | 
Wednesday 04 February 2026 00:46:35 +0000 (0:00:00.159) 0:00:43.513 **** 2026-02-04 00:46:42.156519 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6cd3944c-50dd-590e-9699-94e09e9b1959', 'data_vg': 'ceph-6cd3944c-50dd-590e-9699-94e09e9b1959'})  2026-02-04 00:46:42.156570 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-197bc0b1-bda8-5def-b850-786176b935dd', 'data_vg': 'ceph-197bc0b1-bda8-5def-b850-786176b935dd'})  2026-02-04 00:46:42.156582 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:46:42.156593 | orchestrator | 2026-02-04 00:46:42.156604 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-02-04 00:46:42.156615 | orchestrator | Wednesday 04 February 2026 00:46:36 +0000 (0:00:00.455) 0:00:43.969 **** 2026-02-04 00:46:42.156626 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:46:42.156636 | orchestrator | 2026-02-04 00:46:42.156647 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-02-04 00:46:42.156658 | orchestrator | Wednesday 04 February 2026 00:46:36 +0000 (0:00:00.165) 0:00:44.135 **** 2026-02-04 00:46:42.156669 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6cd3944c-50dd-590e-9699-94e09e9b1959', 'data_vg': 'ceph-6cd3944c-50dd-590e-9699-94e09e9b1959'})  2026-02-04 00:46:42.156705 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-197bc0b1-bda8-5def-b850-786176b935dd', 'data_vg': 'ceph-197bc0b1-bda8-5def-b850-786176b935dd'})  2026-02-04 00:46:42.156717 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:46:42.156728 | orchestrator | 2026-02-04 00:46:42.156740 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-02-04 00:46:42.156751 | orchestrator | Wednesday 04 February 2026 00:46:36 +0000 (0:00:00.201) 0:00:44.336 **** 2026-02-04 00:46:42.156762 | orchestrator | ok: [testbed-node-4] 
2026-02-04 00:46:42.156774 | orchestrator | 2026-02-04 00:46:42.156785 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-02-04 00:46:42.156796 | orchestrator | Wednesday 04 February 2026 00:46:36 +0000 (0:00:00.183) 0:00:44.520 **** 2026-02-04 00:46:42.156807 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6cd3944c-50dd-590e-9699-94e09e9b1959', 'data_vg': 'ceph-6cd3944c-50dd-590e-9699-94e09e9b1959'})  2026-02-04 00:46:42.156818 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-197bc0b1-bda8-5def-b850-786176b935dd', 'data_vg': 'ceph-197bc0b1-bda8-5def-b850-786176b935dd'})  2026-02-04 00:46:42.156829 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:46:42.156840 | orchestrator | 2026-02-04 00:46:42.156851 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2026-02-04 00:46:42.156862 | orchestrator | Wednesday 04 February 2026 00:46:36 +0000 (0:00:00.183) 0:00:44.703 **** 2026-02-04 00:46:42.156873 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6cd3944c-50dd-590e-9699-94e09e9b1959', 'data_vg': 'ceph-6cd3944c-50dd-590e-9699-94e09e9b1959'})  2026-02-04 00:46:42.156884 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-197bc0b1-bda8-5def-b850-786176b935dd', 'data_vg': 'ceph-197bc0b1-bda8-5def-b850-786176b935dd'})  2026-02-04 00:46:42.156895 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:46:42.156906 | orchestrator | 2026-02-04 00:46:42.156917 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-02-04 00:46:42.156945 | orchestrator | Wednesday 04 February 2026 00:46:37 +0000 (0:00:00.187) 0:00:44.891 **** 2026-02-04 00:46:42.156957 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6cd3944c-50dd-590e-9699-94e09e9b1959', 'data_vg': 'ceph-6cd3944c-50dd-590e-9699-94e09e9b1959'})  2026-02-04 
00:46:42.156968 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-197bc0b1-bda8-5def-b850-786176b935dd', 'data_vg': 'ceph-197bc0b1-bda8-5def-b850-786176b935dd'})  2026-02-04 00:46:42.156979 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:46:42.156990 | orchestrator | 2026-02-04 00:46:42.157001 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-02-04 00:46:42.157013 | orchestrator | Wednesday 04 February 2026 00:46:37 +0000 (0:00:00.180) 0:00:45.071 **** 2026-02-04 00:46:42.157024 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:46:42.157035 | orchestrator | 2026-02-04 00:46:42.157046 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-02-04 00:46:42.157057 | orchestrator | Wednesday 04 February 2026 00:46:37 +0000 (0:00:00.193) 0:00:45.265 **** 2026-02-04 00:46:42.157068 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:46:42.157079 | orchestrator | 2026-02-04 00:46:42.157090 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2026-02-04 00:46:42.157101 | orchestrator | Wednesday 04 February 2026 00:46:37 +0000 (0:00:00.186) 0:00:45.452 **** 2026-02-04 00:46:42.157112 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:46:42.157123 | orchestrator | 2026-02-04 00:46:42.157135 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-02-04 00:46:42.157145 | orchestrator | Wednesday 04 February 2026 00:46:37 +0000 (0:00:00.186) 0:00:45.639 **** 2026-02-04 00:46:42.157156 | orchestrator | ok: [testbed-node-4] => { 2026-02-04 00:46:42.157168 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-02-04 00:46:42.157187 | orchestrator | } 2026-02-04 00:46:42.157199 | orchestrator | 2026-02-04 00:46:42.157211 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-02-04 
00:46:42.157222 | orchestrator | Wednesday 04 February 2026 00:46:38 +0000 (0:00:00.233) 0:00:45.872 **** 2026-02-04 00:46:42.157233 | orchestrator | ok: [testbed-node-4] => { 2026-02-04 00:46:42.157244 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-02-04 00:46:42.157255 | orchestrator | } 2026-02-04 00:46:42.157266 | orchestrator | 2026-02-04 00:46:42.157283 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-02-04 00:46:42.157317 | orchestrator | Wednesday 04 February 2026 00:46:38 +0000 (0:00:00.163) 0:00:46.036 **** 2026-02-04 00:46:42.157341 | orchestrator | ok: [testbed-node-4] => { 2026-02-04 00:46:42.157354 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-02-04 00:46:42.157365 | orchestrator | } 2026-02-04 00:46:42.157376 | orchestrator | 2026-02-04 00:46:42.157387 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-02-04 00:46:42.157398 | orchestrator | Wednesday 04 February 2026 00:46:38 +0000 (0:00:00.564) 0:00:46.601 **** 2026-02-04 00:46:42.157409 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:46:42.157438 | orchestrator | 2026-02-04 00:46:42.157449 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-02-04 00:46:42.157461 | orchestrator | Wednesday 04 February 2026 00:46:39 +0000 (0:00:00.613) 0:00:47.215 **** 2026-02-04 00:46:42.157472 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:46:42.157496 | orchestrator | 2026-02-04 00:46:42.157508 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-02-04 00:46:42.157519 | orchestrator | Wednesday 04 February 2026 00:46:40 +0000 (0:00:00.586) 0:00:47.801 **** 2026-02-04 00:46:42.157556 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:46:42.157575 | orchestrator | 2026-02-04 00:46:42.157594 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] 
************************* 2026-02-04 00:46:42.157614 | orchestrator | Wednesday 04 February 2026 00:46:40 +0000 (0:00:00.552) 0:00:48.354 **** 2026-02-04 00:46:42.157632 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:46:42.157648 | orchestrator | 2026-02-04 00:46:42.157659 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-02-04 00:46:42.157670 | orchestrator | Wednesday 04 February 2026 00:46:40 +0000 (0:00:00.212) 0:00:48.566 **** 2026-02-04 00:46:42.157681 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:46:42.157692 | orchestrator | 2026-02-04 00:46:42.157703 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-02-04 00:46:42.157714 | orchestrator | Wednesday 04 February 2026 00:46:40 +0000 (0:00:00.136) 0:00:48.702 **** 2026-02-04 00:46:42.157725 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:46:42.157736 | orchestrator | 2026-02-04 00:46:42.157747 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-02-04 00:46:42.157758 | orchestrator | Wednesday 04 February 2026 00:46:41 +0000 (0:00:00.146) 0:00:48.849 **** 2026-02-04 00:46:42.157769 | orchestrator | ok: [testbed-node-4] => { 2026-02-04 00:46:42.157780 | orchestrator |  "vgs_report": { 2026-02-04 00:46:42.157792 | orchestrator |  "vg": [] 2026-02-04 00:46:42.157803 | orchestrator |  } 2026-02-04 00:46:42.157815 | orchestrator | } 2026-02-04 00:46:42.157826 | orchestrator | 2026-02-04 00:46:42.157837 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-02-04 00:46:42.157848 | orchestrator | Wednesday 04 February 2026 00:46:41 +0000 (0:00:00.216) 0:00:49.066 **** 2026-02-04 00:46:42.157859 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:46:42.157870 | orchestrator | 2026-02-04 00:46:42.157882 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] 
************************ 2026-02-04 00:46:42.157893 | orchestrator | Wednesday 04 February 2026 00:46:41 +0000 (0:00:00.222) 0:00:49.288 **** 2026-02-04 00:46:42.157904 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:46:42.157915 | orchestrator | 2026-02-04 00:46:42.157926 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-02-04 00:46:42.157945 | orchestrator | Wednesday 04 February 2026 00:46:41 +0000 (0:00:00.230) 0:00:49.519 **** 2026-02-04 00:46:42.157956 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:46:42.157967 | orchestrator | 2026-02-04 00:46:42.157979 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-02-04 00:46:42.157990 | orchestrator | Wednesday 04 February 2026 00:46:41 +0000 (0:00:00.227) 0:00:49.747 **** 2026-02-04 00:46:42.158001 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:46:42.158066 | orchestrator | 2026-02-04 00:46:42.158088 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-02-04 00:46:47.804446 | orchestrator | Wednesday 04 February 2026 00:46:42 +0000 (0:00:00.153) 0:00:49.900 **** 2026-02-04 00:46:47.804558 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:46:47.804568 | orchestrator | 2026-02-04 00:46:47.804573 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-02-04 00:46:47.804578 | orchestrator | Wednesday 04 February 2026 00:46:42 +0000 (0:00:00.537) 0:00:50.438 **** 2026-02-04 00:46:47.804582 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:46:47.804586 | orchestrator | 2026-02-04 00:46:47.804591 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-02-04 00:46:47.804595 | orchestrator | Wednesday 04 February 2026 00:46:42 +0000 (0:00:00.214) 0:00:50.652 **** 2026-02-04 00:46:47.804599 | orchestrator | skipping: [testbed-node-4] 
2026-02-04 00:46:47.804604 | orchestrator | 2026-02-04 00:46:47.804608 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-02-04 00:46:47.804612 | orchestrator | Wednesday 04 February 2026 00:46:43 +0000 (0:00:00.197) 0:00:50.850 **** 2026-02-04 00:46:47.804616 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:46:47.804620 | orchestrator | 2026-02-04 00:46:47.804624 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-02-04 00:46:47.804628 | orchestrator | Wednesday 04 February 2026 00:46:43 +0000 (0:00:00.233) 0:00:51.083 **** 2026-02-04 00:46:47.804632 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:46:47.804636 | orchestrator | 2026-02-04 00:46:47.804640 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-02-04 00:46:47.804645 | orchestrator | Wednesday 04 February 2026 00:46:43 +0000 (0:00:00.161) 0:00:51.245 **** 2026-02-04 00:46:47.804649 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:46:47.804653 | orchestrator | 2026-02-04 00:46:47.804657 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-02-04 00:46:47.804661 | orchestrator | Wednesday 04 February 2026 00:46:43 +0000 (0:00:00.161) 0:00:51.406 **** 2026-02-04 00:46:47.804665 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:46:47.804669 | orchestrator | 2026-02-04 00:46:47.804673 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-02-04 00:46:47.804677 | orchestrator | Wednesday 04 February 2026 00:46:43 +0000 (0:00:00.162) 0:00:51.568 **** 2026-02-04 00:46:47.804681 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:46:47.804686 | orchestrator | 2026-02-04 00:46:47.804690 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-02-04 00:46:47.804694 | orchestrator | 
Wednesday 04 February 2026 00:46:43 +0000 (0:00:00.169) 0:00:51.738 **** 2026-02-04 00:46:47.804698 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:46:47.804703 | orchestrator | 2026-02-04 00:46:47.804707 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-02-04 00:46:47.804711 | orchestrator | Wednesday 04 February 2026 00:46:44 +0000 (0:00:00.151) 0:00:51.890 **** 2026-02-04 00:46:47.804715 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:46:47.804722 | orchestrator | 2026-02-04 00:46:47.804728 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-02-04 00:46:47.804735 | orchestrator | Wednesday 04 February 2026 00:46:44 +0000 (0:00:00.155) 0:00:52.045 **** 2026-02-04 00:46:47.804743 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6cd3944c-50dd-590e-9699-94e09e9b1959', 'data_vg': 'ceph-6cd3944c-50dd-590e-9699-94e09e9b1959'})  2026-02-04 00:46:47.804787 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-197bc0b1-bda8-5def-b850-786176b935dd', 'data_vg': 'ceph-197bc0b1-bda8-5def-b850-786176b935dd'})  2026-02-04 00:46:47.804795 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:46:47.804802 | orchestrator | 2026-02-04 00:46:47.804808 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-02-04 00:46:47.804815 | orchestrator | Wednesday 04 February 2026 00:46:44 +0000 (0:00:00.188) 0:00:52.234 **** 2026-02-04 00:46:47.804821 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6cd3944c-50dd-590e-9699-94e09e9b1959', 'data_vg': 'ceph-6cd3944c-50dd-590e-9699-94e09e9b1959'})  2026-02-04 00:46:47.804830 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-197bc0b1-bda8-5def-b850-786176b935dd', 'data_vg': 'ceph-197bc0b1-bda8-5def-b850-786176b935dd'})  2026-02-04 00:46:47.804834 | orchestrator | skipping: 
[testbed-node-4] 2026-02-04 00:46:47.804838 | orchestrator | 2026-02-04 00:46:47.804843 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-02-04 00:46:47.804847 | orchestrator | Wednesday 04 February 2026 00:46:44 +0000 (0:00:00.173) 0:00:52.408 **** 2026-02-04 00:46:47.804851 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6cd3944c-50dd-590e-9699-94e09e9b1959', 'data_vg': 'ceph-6cd3944c-50dd-590e-9699-94e09e9b1959'})  2026-02-04 00:46:47.804855 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-197bc0b1-bda8-5def-b850-786176b935dd', 'data_vg': 'ceph-197bc0b1-bda8-5def-b850-786176b935dd'})  2026-02-04 00:46:47.804859 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:46:47.804863 | orchestrator | 2026-02-04 00:46:47.804867 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2026-02-04 00:46:47.804871 | orchestrator | Wednesday 04 February 2026 00:46:45 +0000 (0:00:00.477) 0:00:52.885 **** 2026-02-04 00:46:47.804875 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6cd3944c-50dd-590e-9699-94e09e9b1959', 'data_vg': 'ceph-6cd3944c-50dd-590e-9699-94e09e9b1959'})  2026-02-04 00:46:47.804879 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-197bc0b1-bda8-5def-b850-786176b935dd', 'data_vg': 'ceph-197bc0b1-bda8-5def-b850-786176b935dd'})  2026-02-04 00:46:47.804883 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:46:47.804887 | orchestrator | 2026-02-04 00:46:47.804904 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-02-04 00:46:47.804908 | orchestrator | Wednesday 04 February 2026 00:46:45 +0000 (0:00:00.194) 0:00:53.080 **** 2026-02-04 00:46:47.804912 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6cd3944c-50dd-590e-9699-94e09e9b1959', 'data_vg': 
'ceph-6cd3944c-50dd-590e-9699-94e09e9b1959'})  2026-02-04 00:46:47.804917 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-197bc0b1-bda8-5def-b850-786176b935dd', 'data_vg': 'ceph-197bc0b1-bda8-5def-b850-786176b935dd'})  2026-02-04 00:46:47.804921 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:46:47.804925 | orchestrator | 2026-02-04 00:46:47.804929 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-02-04 00:46:47.804933 | orchestrator | Wednesday 04 February 2026 00:46:45 +0000 (0:00:00.168) 0:00:53.248 **** 2026-02-04 00:46:47.804937 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6cd3944c-50dd-590e-9699-94e09e9b1959', 'data_vg': 'ceph-6cd3944c-50dd-590e-9699-94e09e9b1959'})  2026-02-04 00:46:47.804941 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-197bc0b1-bda8-5def-b850-786176b935dd', 'data_vg': 'ceph-197bc0b1-bda8-5def-b850-786176b935dd'})  2026-02-04 00:46:47.804945 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:46:47.804949 | orchestrator | 2026-02-04 00:46:47.804953 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-02-04 00:46:47.804957 | orchestrator | Wednesday 04 February 2026 00:46:45 +0000 (0:00:00.177) 0:00:53.426 **** 2026-02-04 00:46:47.804961 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6cd3944c-50dd-590e-9699-94e09e9b1959', 'data_vg': 'ceph-6cd3944c-50dd-590e-9699-94e09e9b1959'})  2026-02-04 00:46:47.804974 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-197bc0b1-bda8-5def-b850-786176b935dd', 'data_vg': 'ceph-197bc0b1-bda8-5def-b850-786176b935dd'})  2026-02-04 00:46:47.804978 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:46:47.804983 | orchestrator | 2026-02-04 00:46:47.804988 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-02-04 
00:46:47.804993 | orchestrator | Wednesday 04 February 2026 00:46:45 +0000 (0:00:00.184) 0:00:53.611 **** 2026-02-04 00:46:47.804997 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6cd3944c-50dd-590e-9699-94e09e9b1959', 'data_vg': 'ceph-6cd3944c-50dd-590e-9699-94e09e9b1959'})  2026-02-04 00:46:47.805002 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-197bc0b1-bda8-5def-b850-786176b935dd', 'data_vg': 'ceph-197bc0b1-bda8-5def-b850-786176b935dd'})  2026-02-04 00:46:47.805007 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:46:47.805012 | orchestrator | 2026-02-04 00:46:47.805016 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-02-04 00:46:47.805021 | orchestrator | Wednesday 04 February 2026 00:46:46 +0000 (0:00:00.181) 0:00:53.792 **** 2026-02-04 00:46:47.805026 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:46:47.805030 | orchestrator | 2026-02-04 00:46:47.805035 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-02-04 00:46:47.805040 | orchestrator | Wednesday 04 February 2026 00:46:46 +0000 (0:00:00.571) 0:00:54.363 **** 2026-02-04 00:46:47.805045 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:46:47.805050 | orchestrator | 2026-02-04 00:46:47.805054 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-02-04 00:46:47.805059 | orchestrator | Wednesday 04 February 2026 00:46:47 +0000 (0:00:00.564) 0:00:54.928 **** 2026-02-04 00:46:47.805064 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:46:47.805068 | orchestrator | 2026-02-04 00:46:47.805072 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-02-04 00:46:47.805076 | orchestrator | Wednesday 04 February 2026 00:46:47 +0000 (0:00:00.159) 0:00:55.087 **** 2026-02-04 00:46:47.805080 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 
'osd-block-197bc0b1-bda8-5def-b850-786176b935dd', 'vg_name': 'ceph-197bc0b1-bda8-5def-b850-786176b935dd'}) 2026-02-04 00:46:47.805085 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-6cd3944c-50dd-590e-9699-94e09e9b1959', 'vg_name': 'ceph-6cd3944c-50dd-590e-9699-94e09e9b1959'}) 2026-02-04 00:46:47.805089 | orchestrator | 2026-02-04 00:46:47.805093 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-02-04 00:46:47.805097 | orchestrator | Wednesday 04 February 2026 00:46:47 +0000 (0:00:00.189) 0:00:55.277 **** 2026-02-04 00:46:47.805101 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6cd3944c-50dd-590e-9699-94e09e9b1959', 'data_vg': 'ceph-6cd3944c-50dd-590e-9699-94e09e9b1959'})  2026-02-04 00:46:47.805105 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-197bc0b1-bda8-5def-b850-786176b935dd', 'data_vg': 'ceph-197bc0b1-bda8-5def-b850-786176b935dd'})  2026-02-04 00:46:47.805109 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:46:47.805113 | orchestrator | 2026-02-04 00:46:47.805117 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-02-04 00:46:47.805121 | orchestrator | Wednesday 04 February 2026 00:46:47 +0000 (0:00:00.185) 0:00:55.463 **** 2026-02-04 00:46:47.805125 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6cd3944c-50dd-590e-9699-94e09e9b1959', 'data_vg': 'ceph-6cd3944c-50dd-590e-9699-94e09e9b1959'})  2026-02-04 00:46:47.805132 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-197bc0b1-bda8-5def-b850-786176b935dd', 'data_vg': 'ceph-197bc0b1-bda8-5def-b850-786176b935dd'})  2026-02-04 00:46:54.934221 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:46:54.934305 | orchestrator | 2026-02-04 00:46:54.934331 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-02-04 00:46:54.934337 | 
orchestrator | Wednesday 04 February 2026 00:46:47 +0000 (0:00:00.190) 0:00:55.654 **** 2026-02-04 00:46:54.934342 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6cd3944c-50dd-590e-9699-94e09e9b1959', 'data_vg': 'ceph-6cd3944c-50dd-590e-9699-94e09e9b1959'})  2026-02-04 00:46:54.934348 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-197bc0b1-bda8-5def-b850-786176b935dd', 'data_vg': 'ceph-197bc0b1-bda8-5def-b850-786176b935dd'})  2026-02-04 00:46:54.934352 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:46:54.934356 | orchestrator | 2026-02-04 00:46:54.934360 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-02-04 00:46:54.934364 | orchestrator | Wednesday 04 February 2026 00:46:48 +0000 (0:00:00.160) 0:00:55.814 **** 2026-02-04 00:46:54.934368 | orchestrator | ok: [testbed-node-4] => { 2026-02-04 00:46:54.934372 | orchestrator |  "lvm_report": { 2026-02-04 00:46:54.934377 | orchestrator |  "lv": [ 2026-02-04 00:46:54.934381 | orchestrator |  { 2026-02-04 00:46:54.934385 | orchestrator |  "lv_name": "osd-block-197bc0b1-bda8-5def-b850-786176b935dd", 2026-02-04 00:46:54.934390 | orchestrator |  "vg_name": "ceph-197bc0b1-bda8-5def-b850-786176b935dd" 2026-02-04 00:46:54.934394 | orchestrator |  }, 2026-02-04 00:46:54.934398 | orchestrator |  { 2026-02-04 00:46:54.934401 | orchestrator |  "lv_name": "osd-block-6cd3944c-50dd-590e-9699-94e09e9b1959", 2026-02-04 00:46:54.934405 | orchestrator |  "vg_name": "ceph-6cd3944c-50dd-590e-9699-94e09e9b1959" 2026-02-04 00:46:54.934409 | orchestrator |  } 2026-02-04 00:46:54.934413 | orchestrator |  ], 2026-02-04 00:46:54.934417 | orchestrator |  "pv": [ 2026-02-04 00:46:54.934421 | orchestrator |  { 2026-02-04 00:46:54.934424 | orchestrator |  "pv_name": "/dev/sdb", 2026-02-04 00:46:54.934438 | orchestrator |  "vg_name": "ceph-6cd3944c-50dd-590e-9699-94e09e9b1959" 2026-02-04 00:46:54.934442 | orchestrator |  }, 2026-02-04 
00:46:54.934446 | orchestrator |  { 2026-02-04 00:46:54.934450 | orchestrator |  "pv_name": "/dev/sdc", 2026-02-04 00:46:54.934453 | orchestrator |  "vg_name": "ceph-197bc0b1-bda8-5def-b850-786176b935dd" 2026-02-04 00:46:54.934457 | orchestrator |  } 2026-02-04 00:46:54.934461 | orchestrator |  ] 2026-02-04 00:46:54.934465 | orchestrator |  } 2026-02-04 00:46:54.934469 | orchestrator | } 2026-02-04 00:46:54.934473 | orchestrator | 2026-02-04 00:46:54.934477 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-02-04 00:46:54.934481 | orchestrator | 2026-02-04 00:46:54.934485 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-02-04 00:46:54.934489 | orchestrator | Wednesday 04 February 2026 00:46:48 +0000 (0:00:00.642) 0:00:56.456 **** 2026-02-04 00:46:54.934493 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-02-04 00:46:54.934496 | orchestrator | 2026-02-04 00:46:54.934500 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-02-04 00:46:54.934504 | orchestrator | Wednesday 04 February 2026 00:46:48 +0000 (0:00:00.293) 0:00:56.749 **** 2026-02-04 00:46:54.934508 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:46:54.934512 | orchestrator | 2026-02-04 00:46:54.934515 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 00:46:54.934536 | orchestrator | Wednesday 04 February 2026 00:46:49 +0000 (0:00:00.343) 0:00:57.093 **** 2026-02-04 00:46:54.934543 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2026-02-04 00:46:54.934549 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2026-02-04 00:46:54.934553 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2026-02-04 00:46:54.934557 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2026-02-04 00:46:54.934564 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2026-02-04 00:46:54.934568 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2026-02-04 00:46:54.934571 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2026-02-04 00:46:54.934575 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2026-02-04 00:46:54.934579 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2026-02-04 00:46:54.934585 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2026-02-04 00:46:54.934589 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2026-02-04 00:46:54.934593 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2026-02-04 00:46:54.934596 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2026-02-04 00:46:54.934600 | orchestrator | 2026-02-04 00:46:54.934604 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 00:46:54.934608 | orchestrator | Wednesday 04 February 2026 00:46:49 +0000 (0:00:00.487) 0:00:57.581 **** 2026-02-04 00:46:54.934611 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:46:54.934615 | orchestrator | 2026-02-04 00:46:54.934619 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 00:46:54.934623 | orchestrator | Wednesday 04 February 2026 00:46:50 +0000 (0:00:00.263) 0:00:57.845 **** 2026-02-04 00:46:54.934627 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:46:54.934630 | orchestrator | 2026-02-04 
00:46:54.934634 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-04 00:46:54.934651 | orchestrator | Wednesday 04 February 2026 00:46:50 +0000 (0:00:00.225) 0:00:58.070 ****
2026-02-04 00:46:54.934655 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:46:54.934658 | orchestrator |
2026-02-04 00:46:54.934662 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-04 00:46:54.934666 | orchestrator | Wednesday 04 February 2026 00:46:50 +0000 (0:00:00.222) 0:00:58.292 ****
2026-02-04 00:46:54.934670 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:46:54.934674 | orchestrator |
2026-02-04 00:46:54.934677 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-04 00:46:54.934681 | orchestrator | Wednesday 04 February 2026 00:46:50 +0000 (0:00:00.224) 0:00:58.517 ****
2026-02-04 00:46:54.934685 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:46:54.934689 | orchestrator |
2026-02-04 00:46:54.934693 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-04 00:46:54.934696 | orchestrator | Wednesday 04 February 2026 00:46:51 +0000 (0:00:00.822) 0:00:59.339 ****
2026-02-04 00:46:54.934700 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:46:54.934704 | orchestrator |
2026-02-04 00:46:54.934708 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-04 00:46:54.934712 | orchestrator | Wednesday 04 February 2026 00:46:51 +0000 (0:00:00.224) 0:00:59.564 ****
2026-02-04 00:46:54.934715 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:46:54.934719 | orchestrator |
2026-02-04 00:46:54.934723 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-04 00:46:54.934727 | orchestrator | Wednesday 04 February 2026 00:46:52 +0000 (0:00:00.252) 0:00:59.817 ****
2026-02-04 00:46:54.934731 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:46:54.934735 | orchestrator |
2026-02-04 00:46:54.934738 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-04 00:46:54.934742 | orchestrator | Wednesday 04 February 2026 00:46:52 +0000 (0:00:00.208) 0:01:00.025 ****
2026-02-04 00:46:54.934746 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_3f45c181-f890-4932-9a09-2e0bc4fa8f14)
2026-02-04 00:46:54.934752 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_3f45c181-f890-4932-9a09-2e0bc4fa8f14)
2026-02-04 00:46:54.934759 | orchestrator |
2026-02-04 00:46:54.934763 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-04 00:46:54.934767 | orchestrator | Wednesday 04 February 2026 00:46:52 +0000 (0:00:00.455) 0:01:00.481 ****
2026-02-04 00:46:54.934770 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_7fd0bd10-abd8-4e0c-8290-4705cc531d08)
2026-02-04 00:46:54.934774 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_7fd0bd10-abd8-4e0c-8290-4705cc531d08)
2026-02-04 00:46:54.934778 | orchestrator |
2026-02-04 00:46:54.934782 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-04 00:46:54.934786 | orchestrator | Wednesday 04 February 2026 00:46:53 +0000 (0:00:00.475) 0:01:00.956 ****
2026-02-04 00:46:54.934790 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_194ead4c-37cf-4237-b5c2-bf752e6bc508)
2026-02-04 00:46:54.934793 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_194ead4c-37cf-4237-b5c2-bf752e6bc508)
2026-02-04 00:46:54.934797 | orchestrator |
2026-02-04 00:46:54.934801 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-04 00:46:54.934805 | orchestrator | Wednesday 04 February 2026 00:46:53 +0000 (0:00:00.469) 0:01:01.426 ****
2026-02-04 00:46:54.934809 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_ba01a385-e2e8-43d7-8237-fc6e15a9de89)
2026-02-04 00:46:54.934813 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_ba01a385-e2e8-43d7-8237-fc6e15a9de89)
2026-02-04 00:46:54.934816 | orchestrator |
2026-02-04 00:46:54.934820 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-04 00:46:54.934824 | orchestrator | Wednesday 04 February 2026 00:46:54 +0000 (0:00:00.509) 0:01:01.936 ****
2026-02-04 00:46:54.934828 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-02-04 00:46:54.934831 | orchestrator |
2026-02-04 00:46:54.934835 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-04 00:46:54.934839 | orchestrator | Wednesday 04 February 2026 00:46:54 +0000 (0:00:00.392) 0:01:02.328 ****
2026-02-04 00:46:54.934843 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2026-02-04 00:46:54.934846 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2026-02-04 00:46:54.934850 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2026-02-04 00:46:54.934854 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2026-02-04 00:46:54.934858 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2026-02-04 00:46:54.934862 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2026-02-04 00:46:54.934865 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2026-02-04 00:46:54.934869 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2026-02-04 00:46:54.934873 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2026-02-04 00:46:54.934876 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2026-02-04 00:46:54.934883 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2026-02-04 00:46:54.934893 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2026-02-04 00:47:05.518373 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2026-02-04 00:47:05.518485 | orchestrator |
2026-02-04 00:47:05.518502 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-04 00:47:05.518516 | orchestrator | Wednesday 04 February 2026 00:46:55 +0000 (0:00:00.444) 0:01:02.773 ****
2026-02-04 00:47:05.518673 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:47:05.518690 | orchestrator |
2026-02-04 00:47:05.518703 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-04 00:47:05.518716 | orchestrator | Wednesday 04 February 2026 00:46:55 +0000 (0:00:00.254) 0:01:03.027 ****
2026-02-04 00:47:05.518729 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:47:05.518742 | orchestrator |
2026-02-04 00:47:05.518808 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-04 00:47:05.518822 | orchestrator | Wednesday 04 February 2026 00:46:56 +0000 (0:00:00.910) 0:01:03.938 ****
2026-02-04 00:47:05.518835 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:47:05.518846 | orchestrator |
2026-02-04 00:47:05.518858 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-04 00:47:05.518869 | orchestrator | Wednesday 04 February 2026 00:46:56 +0000 (0:00:00.258) 0:01:04.197 ****
2026-02-04 00:47:05.518882 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:47:05.518894 | orchestrator |
2026-02-04 00:47:05.518905 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-04 00:47:05.518916 | orchestrator | Wednesday 04 February 2026 00:46:56 +0000 (0:00:00.248) 0:01:04.445 ****
2026-02-04 00:47:05.518928 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:47:05.518940 | orchestrator |
2026-02-04 00:47:05.518951 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-04 00:47:05.518962 | orchestrator | Wednesday 04 February 2026 00:46:56 +0000 (0:00:00.266) 0:01:04.712 ****
2026-02-04 00:47:05.518972 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:47:05.518982 | orchestrator |
2026-02-04 00:47:05.518997 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-04 00:47:05.519008 | orchestrator | Wednesday 04 February 2026 00:46:57 +0000 (0:00:00.241) 0:01:04.953 ****
2026-02-04 00:47:05.519019 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:47:05.519029 | orchestrator |
2026-02-04 00:47:05.519040 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-04 00:47:05.519052 | orchestrator | Wednesday 04 February 2026 00:46:57 +0000 (0:00:00.225) 0:01:05.179 ****
2026-02-04 00:47:05.519063 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:47:05.519074 | orchestrator |
2026-02-04 00:47:05.519085 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-04 00:47:05.519097 | orchestrator | Wednesday 04 February 2026 00:46:57 +0000 (0:00:00.211) 0:01:05.390 ****
2026-02-04 00:47:05.519109 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2026-02-04 00:47:05.519121 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2026-02-04 00:47:05.519133 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2026-02-04 00:47:05.519145 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2026-02-04 00:47:05.519156 | orchestrator |
2026-02-04 00:47:05.519169 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-04 00:47:05.519181 | orchestrator | Wednesday 04 February 2026 00:46:58 +0000 (0:00:00.739) 0:01:06.129 ****
2026-02-04 00:47:05.519193 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:47:05.519205 | orchestrator |
2026-02-04 00:47:05.519216 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-04 00:47:05.519227 | orchestrator | Wednesday 04 February 2026 00:46:58 +0000 (0:00:00.218) 0:01:06.347 ****
2026-02-04 00:47:05.519238 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:47:05.519249 | orchestrator |
2026-02-04 00:47:05.519260 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-04 00:47:05.519272 | orchestrator | Wednesday 04 February 2026 00:46:58 +0000 (0:00:00.235) 0:01:06.582 ****
2026-02-04 00:47:05.519283 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:47:05.519295 | orchestrator |
2026-02-04 00:47:05.519306 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-04 00:47:05.519318 | orchestrator | Wednesday 04 February 2026 00:46:59 +0000 (0:00:00.237) 0:01:06.820 ****
2026-02-04 00:47:05.519340 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:47:05.519352 | orchestrator |
2026-02-04 00:47:05.519363 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-02-04 00:47:05.519374 | orchestrator | Wednesday 04 February 2026 00:46:59 +0000 (0:00:00.241) 0:01:07.062 ****
2026-02-04 00:47:05.519385 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:47:05.519397 | orchestrator |
2026-02-04 00:47:05.519408 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-02-04 00:47:05.519419 | orchestrator | Wednesday 04 February 2026 00:46:59 +0000 (0:00:00.472) 0:01:07.535 ****
2026-02-04 00:47:05.519431 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'e3daecb5-9fd0-5834-b191-078d341d10dc'}})
2026-02-04 00:47:05.519443 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '607d890d-3e41-57a1-9874-83b389fa50fb'}})
2026-02-04 00:47:05.519454 | orchestrator |
2026-02-04 00:47:05.519465 | orchestrator | TASK [Create block VGs] ********************************************************
2026-02-04 00:47:05.519477 | orchestrator | Wednesday 04 February 2026 00:47:00 +0000 (0:00:00.223) 0:01:07.758 ****
2026-02-04 00:47:05.519490 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-e3daecb5-9fd0-5834-b191-078d341d10dc', 'data_vg': 'ceph-e3daecb5-9fd0-5834-b191-078d341d10dc'})
2026-02-04 00:47:05.519503 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-607d890d-3e41-57a1-9874-83b389fa50fb', 'data_vg': 'ceph-607d890d-3e41-57a1-9874-83b389fa50fb'})
2026-02-04 00:47:05.519515 | orchestrator |
2026-02-04 00:47:05.519554 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-02-04 00:47:05.519590 | orchestrator | Wednesday 04 February 2026 00:47:02 +0000 (0:00:02.027) 0:01:09.786 ****
2026-02-04 00:47:05.519604 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e3daecb5-9fd0-5834-b191-078d341d10dc', 'data_vg': 'ceph-e3daecb5-9fd0-5834-b191-078d341d10dc'})
2026-02-04 00:47:05.519618 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-607d890d-3e41-57a1-9874-83b389fa50fb', 'data_vg': 'ceph-607d890d-3e41-57a1-9874-83b389fa50fb'})
2026-02-04 00:47:05.519630 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:47:05.519642 | orchestrator |
2026-02-04 00:47:05.519654 | orchestrator | TASK [Create block LVs] ********************************************************
2026-02-04 00:47:05.519666 | orchestrator | Wednesday 04 February 2026 00:47:02 +0000 (0:00:00.192) 0:01:09.979 ****
2026-02-04 00:47:05.519678 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-e3daecb5-9fd0-5834-b191-078d341d10dc', 'data_vg': 'ceph-e3daecb5-9fd0-5834-b191-078d341d10dc'})
2026-02-04 00:47:05.519690 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-607d890d-3e41-57a1-9874-83b389fa50fb', 'data_vg': 'ceph-607d890d-3e41-57a1-9874-83b389fa50fb'})
2026-02-04 00:47:05.519701 | orchestrator |
2026-02-04 00:47:05.519713 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-02-04 00:47:05.519723 | orchestrator | Wednesday 04 February 2026 00:47:03 +0000 (0:00:01.449) 0:01:11.428 ****
2026-02-04 00:47:05.519734 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e3daecb5-9fd0-5834-b191-078d341d10dc', 'data_vg': 'ceph-e3daecb5-9fd0-5834-b191-078d341d10dc'})
2026-02-04 00:47:05.519745 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-607d890d-3e41-57a1-9874-83b389fa50fb', 'data_vg': 'ceph-607d890d-3e41-57a1-9874-83b389fa50fb'})
2026-02-04 00:47:05.519762 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:47:05.519774 | orchestrator |
2026-02-04 00:47:05.519786 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-02-04 00:47:05.519797 | orchestrator | Wednesday 04 February 2026 00:47:03 +0000 (0:00:00.185) 0:01:11.613 ****
2026-02-04 00:47:05.519808 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:47:05.519820 | orchestrator |
2026-02-04 00:47:05.519830 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-02-04 00:47:05.519842 | orchestrator | Wednesday 04 February 2026 00:47:04 +0000 (0:00:00.193) 0:01:11.807 ****
2026-02-04 00:47:05.519860 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e3daecb5-9fd0-5834-b191-078d341d10dc', 'data_vg': 'ceph-e3daecb5-9fd0-5834-b191-078d341d10dc'})
2026-02-04 00:47:05.519870 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-607d890d-3e41-57a1-9874-83b389fa50fb', 'data_vg': 'ceph-607d890d-3e41-57a1-9874-83b389fa50fb'})
2026-02-04 00:47:05.519882 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:47:05.519893 | orchestrator |
2026-02-04 00:47:05.519904 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-02-04 00:47:05.519915 | orchestrator | Wednesday 04 February 2026 00:47:04 +0000 (0:00:00.171) 0:01:11.978 ****
2026-02-04 00:47:05.519926 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:47:05.519937 | orchestrator |
2026-02-04 00:47:05.519947 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-02-04 00:47:05.519959 | orchestrator | Wednesday 04 February 2026 00:47:04 +0000 (0:00:00.161) 0:01:12.140 ****
2026-02-04 00:47:05.519970 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e3daecb5-9fd0-5834-b191-078d341d10dc', 'data_vg': 'ceph-e3daecb5-9fd0-5834-b191-078d341d10dc'})
2026-02-04 00:47:05.519981 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-607d890d-3e41-57a1-9874-83b389fa50fb', 'data_vg': 'ceph-607d890d-3e41-57a1-9874-83b389fa50fb'})
2026-02-04 00:47:05.519991 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:47:05.520002 | orchestrator |
2026-02-04 00:47:05.520012 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-02-04 00:47:05.520023 | orchestrator | Wednesday 04 February 2026 00:47:04 +0000 (0:00:00.173) 0:01:12.313 ****
2026-02-04 00:47:05.520035 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:47:05.520046 | orchestrator |
2026-02-04 00:47:05.520058 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-02-04 00:47:05.520069 | orchestrator | Wednesday 04 February 2026 00:47:04 +0000 (0:00:00.148) 0:01:12.462 ****
2026-02-04 00:47:05.520081 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e3daecb5-9fd0-5834-b191-078d341d10dc', 'data_vg': 'ceph-e3daecb5-9fd0-5834-b191-078d341d10dc'})
2026-02-04 00:47:05.520093 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-607d890d-3e41-57a1-9874-83b389fa50fb', 'data_vg': 'ceph-607d890d-3e41-57a1-9874-83b389fa50fb'})
2026-02-04 00:47:05.520104 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:47:05.520116 | orchestrator |
2026-02-04 00:47:05.520128 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-02-04 00:47:05.520139 | orchestrator | Wednesday 04 February 2026 00:47:04 +0000 (0:00:00.166) 0:01:12.628 ****
2026-02-04 00:47:05.520150 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:47:05.520162 | orchestrator |
2026-02-04 00:47:05.520173 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-02-04 00:47:05.520184 | orchestrator | Wednesday 04 February 2026 00:47:05 +0000 (0:00:00.542) 0:01:13.171 ****
2026-02-04 00:47:05.520207 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e3daecb5-9fd0-5834-b191-078d341d10dc', 'data_vg': 'ceph-e3daecb5-9fd0-5834-b191-078d341d10dc'})
2026-02-04 00:47:12.788873 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-607d890d-3e41-57a1-9874-83b389fa50fb', 'data_vg': 'ceph-607d890d-3e41-57a1-9874-83b389fa50fb'})
2026-02-04 00:47:12.788968 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:47:12.788979 | orchestrator |
2026-02-04 00:47:12.788988 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-02-04 00:47:12.788997 | orchestrator | Wednesday 04 February 2026 00:47:05 +0000 (0:00:00.211) 0:01:13.382 ****
2026-02-04 00:47:12.789005 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e3daecb5-9fd0-5834-b191-078d341d10dc', 'data_vg': 'ceph-e3daecb5-9fd0-5834-b191-078d341d10dc'})
2026-02-04 00:47:12.789012 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-607d890d-3e41-57a1-9874-83b389fa50fb', 'data_vg': 'ceph-607d890d-3e41-57a1-9874-83b389fa50fb'})
2026-02-04 00:47:12.789039 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:47:12.789046 | orchestrator |
2026-02-04 00:47:12.789053 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-02-04 00:47:12.789060 | orchestrator | Wednesday 04 February 2026 00:47:05 +0000 (0:00:00.169) 0:01:13.552 ****
2026-02-04 00:47:12.789067 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e3daecb5-9fd0-5834-b191-078d341d10dc', 'data_vg': 'ceph-e3daecb5-9fd0-5834-b191-078d341d10dc'})
2026-02-04 00:47:12.789074 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-607d890d-3e41-57a1-9874-83b389fa50fb', 'data_vg': 'ceph-607d890d-3e41-57a1-9874-83b389fa50fb'})
2026-02-04 00:47:12.789081 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:47:12.789087 | orchestrator |
2026-02-04 00:47:12.789094 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-02-04 00:47:12.789114 | orchestrator | Wednesday 04 February 2026 00:47:05 +0000 (0:00:00.171) 0:01:13.724 ****
2026-02-04 00:47:12.789120 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:47:12.789127 | orchestrator |
2026-02-04 00:47:12.789132 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-02-04 00:47:12.789138 | orchestrator | Wednesday 04 February 2026 00:47:06 +0000 (0:00:00.176) 0:01:13.901 ****
2026-02-04 00:47:12.789145 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:47:12.789151 | orchestrator |
2026-02-04 00:47:12.789158 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-02-04 00:47:12.789164 | orchestrator | Wednesday 04 February 2026 00:47:06 +0000 (0:00:00.154) 0:01:14.056 ****
2026-02-04 00:47:12.789169 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:47:12.789175 | orchestrator |
2026-02-04 00:47:12.789181 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-02-04 00:47:12.789187 | orchestrator | Wednesday 04 February 2026 00:47:06 +0000 (0:00:00.141) 0:01:14.198 ****
2026-02-04 00:47:12.789193 | orchestrator | ok: [testbed-node-5] => {
2026-02-04 00:47:12.789201 | orchestrator |  "_num_osds_wanted_per_db_vg": {}
2026-02-04 00:47:12.789207 | orchestrator | }
2026-02-04 00:47:12.789215 | orchestrator |
2026-02-04 00:47:12.789222 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-02-04 00:47:12.789229 | orchestrator | Wednesday 04 February 2026 00:47:06 +0000 (0:00:00.152) 0:01:14.350 ****
2026-02-04 00:47:12.789235 | orchestrator | ok: [testbed-node-5] => {
2026-02-04 00:47:12.789242 | orchestrator |  "_num_osds_wanted_per_wal_vg": {}
2026-02-04 00:47:12.789249 | orchestrator | }
2026-02-04 00:47:12.789255 | orchestrator |
2026-02-04 00:47:12.789262 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-02-04 00:47:12.789268 | orchestrator | Wednesday 04 February 2026 00:47:06 +0000 (0:00:00.149) 0:01:14.499 ****
2026-02-04 00:47:12.789274 | orchestrator | ok: [testbed-node-5] => {
2026-02-04 00:47:12.789281 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {}
2026-02-04 00:47:12.789287 | orchestrator | }
2026-02-04 00:47:12.789293 | orchestrator |
2026-02-04 00:47:12.789299 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-02-04 00:47:12.789305 | orchestrator | Wednesday 04 February 2026 00:47:06 +0000 (0:00:00.161) 0:01:14.661 ****
2026-02-04 00:47:12.789312 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:47:12.789318 | orchestrator |
2026-02-04 00:47:12.789324 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-02-04 00:47:12.789330 | orchestrator | Wednesday 04 February 2026 00:47:07 +0000 (0:00:00.561) 0:01:15.222 ****
2026-02-04 00:47:12.789336 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:47:12.789342 | orchestrator |
2026-02-04 00:47:12.789349 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-02-04 00:47:12.789355 | orchestrator | Wednesday 04 February 2026 00:47:08 +0000 (0:00:00.540) 0:01:15.762 ****
2026-02-04 00:47:12.789361 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:47:12.789375 | orchestrator |
2026-02-04 00:47:12.789382 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-02-04 00:47:12.789387 | orchestrator | Wednesday 04 February 2026 00:47:08 +0000 (0:00:00.890) 0:01:16.653 ****
2026-02-04 00:47:12.789394 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:47:12.789400 | orchestrator |
2026-02-04 00:47:12.789407 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-02-04 00:47:12.789416 | orchestrator | Wednesday 04 February 2026 00:47:09 +0000 (0:00:00.168) 0:01:16.822 ****
2026-02-04 00:47:12.789422 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:47:12.789432 | orchestrator |
2026-02-04 00:47:12.789440 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-02-04 00:47:12.789449 | orchestrator | Wednesday 04 February 2026 00:47:09 +0000 (0:00:00.139) 0:01:16.962 ****
2026-02-04 00:47:12.789457 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:47:12.789464 | orchestrator |
2026-02-04 00:47:12.789471 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-02-04 00:47:12.789478 | orchestrator | Wednesday 04 February 2026 00:47:09 +0000 (0:00:00.132) 0:01:17.094 ****
2026-02-04 00:47:12.789486 | orchestrator | ok: [testbed-node-5] => {
2026-02-04 00:47:12.789493 | orchestrator |  "vgs_report": {
2026-02-04 00:47:12.789502 | orchestrator |  "vg": []
2026-02-04 00:47:12.789576 | orchestrator |  }
2026-02-04 00:47:12.789586 | orchestrator | }
2026-02-04 00:47:12.789594 | orchestrator |
2026-02-04 00:47:12.789602 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-02-04 00:47:12.789612 | orchestrator | Wednesday 04 February 2026 00:47:09 +0000 (0:00:00.162) 0:01:17.257 ****
2026-02-04 00:47:12.789619 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:47:12.789628 | orchestrator |
2026-02-04 00:47:12.789637 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-02-04 00:47:12.789644 | orchestrator | Wednesday 04 February 2026 00:47:09 +0000 (0:00:00.166) 0:01:17.423 ****
2026-02-04 00:47:12.789650 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:47:12.789657 | orchestrator |
2026-02-04 00:47:12.789664 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-02-04 00:47:12.789670 | orchestrator | Wednesday 04 February 2026 00:47:09 +0000 (0:00:00.163) 0:01:17.587 ****
2026-02-04 00:47:12.789677 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:47:12.789684 | orchestrator |
2026-02-04 00:47:12.789692 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-02-04 00:47:12.789699 | orchestrator | Wednesday 04 February 2026 00:47:09 +0000 (0:00:00.150) 0:01:17.738 ****
2026-02-04 00:47:12.789705 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:47:12.789714 | orchestrator |
2026-02-04 00:47:12.789722 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-02-04 00:47:12.789728 | orchestrator | Wednesday 04 February 2026 00:47:10 +0000 (0:00:00.186) 0:01:17.924 ****
2026-02-04 00:47:12.789735 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:47:12.789742 | orchestrator |
2026-02-04 00:47:12.789750 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-02-04 00:47:12.789757 | orchestrator | Wednesday 04 February 2026 00:47:10 +0000 (0:00:00.164) 0:01:18.089 ****
2026-02-04 00:47:12.789766 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:47:12.789773 | orchestrator |
2026-02-04 00:47:12.789781 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-02-04 00:47:12.789789 | orchestrator | Wednesday 04 February 2026 00:47:10 +0000 (0:00:00.172) 0:01:18.261 ****
2026-02-04 00:47:12.789797 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:47:12.789805 | orchestrator |
2026-02-04 00:47:12.789815 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-02-04 00:47:12.789823 | orchestrator | Wednesday 04 February 2026 00:47:10 +0000 (0:00:00.164) 0:01:18.426 ****
2026-02-04 00:47:12.789830 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:47:12.789837 | orchestrator |
2026-02-04 00:47:12.789843 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-02-04 00:47:12.789857 | orchestrator | Wednesday 04 February 2026 00:47:11 +0000 (0:00:00.553) 0:01:18.979 ****
2026-02-04 00:47:12.789863 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:47:12.789870 | orchestrator |
2026-02-04 00:47:12.789877 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-02-04 00:47:12.789883 | orchestrator | Wednesday 04 February 2026 00:47:11 +0000 (0:00:00.215) 0:01:19.194 ****
2026-02-04 00:47:12.789890 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:47:12.789897 | orchestrator |
2026-02-04 00:47:12.789903 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-02-04 00:47:12.789910 | orchestrator | Wednesday 04 February 2026 00:47:11 +0000 (0:00:00.187) 0:01:19.381 ****
2026-02-04 00:47:12.789916 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:47:12.789923 | orchestrator |
2026-02-04 00:47:12.789929 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-02-04 00:47:12.789936 | orchestrator | Wednesday 04 February 2026 00:47:11 +0000 (0:00:00.148) 0:01:19.530 ****
2026-02-04 00:47:12.789943 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:47:12.789949 | orchestrator |
2026-02-04 00:47:12.789956 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-02-04 00:47:12.789962 | orchestrator | Wednesday 04 February 2026 00:47:11 +0000 (0:00:00.173) 0:01:19.703 ****
2026-02-04 00:47:12.789968 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:47:12.789975 | orchestrator |
2026-02-04 00:47:12.789981 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-02-04 00:47:12.789988 | orchestrator | Wednesday 04 February 2026 00:47:12 +0000 (0:00:00.159) 0:01:19.863 ****
2026-02-04 00:47:12.789995 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:47:12.790001 | orchestrator |
2026-02-04 00:47:12.790007 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-02-04 00:47:12.790013 | orchestrator | Wednesday 04 February 2026 00:47:12 +0000 (0:00:00.161) 0:01:20.025 ****
2026-02-04 00:47:12.790066 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e3daecb5-9fd0-5834-b191-078d341d10dc', 'data_vg': 'ceph-e3daecb5-9fd0-5834-b191-078d341d10dc'})
2026-02-04 00:47:12.790074 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-607d890d-3e41-57a1-9874-83b389fa50fb', 'data_vg': 'ceph-607d890d-3e41-57a1-9874-83b389fa50fb'})
2026-02-04 00:47:12.790081 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:47:12.790087 | orchestrator |
2026-02-04 00:47:12.790094 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-02-04 00:47:12.790101 | orchestrator | Wednesday 04 February 2026 00:47:12 +0000 (0:00:00.226) 0:01:20.251 ****
2026-02-04 00:47:12.790107 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e3daecb5-9fd0-5834-b191-078d341d10dc', 'data_vg': 'ceph-e3daecb5-9fd0-5834-b191-078d341d10dc'})
2026-02-04 00:47:12.790114 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-607d890d-3e41-57a1-9874-83b389fa50fb', 'data_vg': 'ceph-607d890d-3e41-57a1-9874-83b389fa50fb'})
2026-02-04 00:47:12.790121 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:47:12.790127 | orchestrator |
2026-02-04 00:47:12.790134 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-02-04 00:47:12.790141 | orchestrator | Wednesday 04 February 2026 00:47:12 +0000 (0:00:00.214) 0:01:20.465 ****
2026-02-04 00:47:12.790158 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e3daecb5-9fd0-5834-b191-078d341d10dc', 'data_vg': 'ceph-e3daecb5-9fd0-5834-b191-078d341d10dc'})
2026-02-04 00:47:16.482959 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-607d890d-3e41-57a1-9874-83b389fa50fb', 'data_vg': 'ceph-607d890d-3e41-57a1-9874-83b389fa50fb'})
2026-02-04 00:47:16.483042 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:47:16.483054 | orchestrator |
2026-02-04 00:47:16.483060 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-02-04 00:47:16.483066 | orchestrator | Wednesday 04 February 2026 00:47:12 +0000 (0:00:00.168) 0:01:20.634 ****
2026-02-04 00:47:16.483102 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e3daecb5-9fd0-5834-b191-078d341d10dc', 'data_vg': 'ceph-e3daecb5-9fd0-5834-b191-078d341d10dc'})
2026-02-04 00:47:16.483107 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-607d890d-3e41-57a1-9874-83b389fa50fb', 'data_vg': 'ceph-607d890d-3e41-57a1-9874-83b389fa50fb'})
2026-02-04 00:47:16.483111 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:47:16.483115 | orchestrator |
2026-02-04 00:47:16.483119 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-02-04 00:47:16.483123 | orchestrator | Wednesday 04 February 2026 00:47:13 +0000 (0:00:00.199) 0:01:20.833 ****
2026-02-04 00:47:16.483127 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e3daecb5-9fd0-5834-b191-078d341d10dc', 'data_vg': 'ceph-e3daecb5-9fd0-5834-b191-078d341d10dc'})
2026-02-04 00:47:16.483135 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-607d890d-3e41-57a1-9874-83b389fa50fb', 'data_vg': 'ceph-607d890d-3e41-57a1-9874-83b389fa50fb'})
2026-02-04 00:47:16.483139 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:47:16.483142 | orchestrator |
2026-02-04 00:47:16.483146 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-02-04 00:47:16.483150 | orchestrator | Wednesday 04 February 2026 00:47:13 +0000 (0:00:00.205) 0:01:21.039 ****
2026-02-04 00:47:16.483154 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e3daecb5-9fd0-5834-b191-078d341d10dc', 'data_vg': 'ceph-e3daecb5-9fd0-5834-b191-078d341d10dc'})
2026-02-04 00:47:16.483158 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-607d890d-3e41-57a1-9874-83b389fa50fb', 'data_vg': 'ceph-607d890d-3e41-57a1-9874-83b389fa50fb'})
2026-02-04 00:47:16.483161 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:47:16.483166 | orchestrator |
2026-02-04 00:47:16.483169 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2026-02-04 00:47:16.483173 | orchestrator | Wednesday 04 February 2026 00:47:13 +0000 (0:00:00.531) 0:01:21.571 ****
2026-02-04 00:47:16.483177 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e3daecb5-9fd0-5834-b191-078d341d10dc', 'data_vg': 'ceph-e3daecb5-9fd0-5834-b191-078d341d10dc'})
2026-02-04 00:47:16.483181 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-607d890d-3e41-57a1-9874-83b389fa50fb', 'data_vg': 'ceph-607d890d-3e41-57a1-9874-83b389fa50fb'})
2026-02-04 00:47:16.483185 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:47:16.483189 | orchestrator |
2026-02-04 00:47:16.483192 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-02-04 00:47:16.483196 | orchestrator | Wednesday 04 February 2026 00:47:14 +0000 (0:00:00.189) 0:01:21.760 ****
2026-02-04 00:47:16.483200 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e3daecb5-9fd0-5834-b191-078d341d10dc', 'data_vg': 'ceph-e3daecb5-9fd0-5834-b191-078d341d10dc'})
2026-02-04 00:47:16.483204 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-607d890d-3e41-57a1-9874-83b389fa50fb', 'data_vg': 'ceph-607d890d-3e41-57a1-9874-83b389fa50fb'})
2026-02-04 00:47:16.483208 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:47:16.483211 | orchestrator |
2026-02-04 00:47:16.483215 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-02-04 00:47:16.483219 | orchestrator | Wednesday 04 February 2026 00:47:14 +0000 (0:00:00.185) 0:01:21.946 ****
2026-02-04 00:47:16.483223 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:47:16.483228 | orchestrator |
2026-02-04 00:47:16.483231 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-02-04 00:47:16.483235 | orchestrator | Wednesday 04 February 2026 00:47:14 +0000 (0:00:00.589) 0:01:22.535 ****
2026-02-04 00:47:16.483239 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:47:16.483243 | orchestrator |
2026-02-04 00:47:16.483246 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2026-02-04 00:47:16.483256 | orchestrator | Wednesday 04 February 2026 00:47:15 +0000 (0:00:00.581) 0:01:23.117 ****
2026-02-04 00:47:16.483260 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:47:16.483263 | orchestrator |
2026-02-04 00:47:16.483267 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2026-02-04 00:47:16.483271 | orchestrator | Wednesday 04 February 2026 00:47:15 +0000 (0:00:00.161) 0:01:23.279 ****
2026-02-04 00:47:16.483275 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-607d890d-3e41-57a1-9874-83b389fa50fb', 'vg_name': 'ceph-607d890d-3e41-57a1-9874-83b389fa50fb'})
2026-02-04 00:47:16.483280 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-e3daecb5-9fd0-5834-b191-078d341d10dc', 'vg_name': 'ceph-e3daecb5-9fd0-5834-b191-078d341d10dc'})
2026-02-04 00:47:16.483284 | orchestrator |
2026-02-04 00:47:16.483287 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2026-02-04 00:47:16.483291 | orchestrator | Wednesday 04 February 2026 00:47:15 +0000 (0:00:00.185) 0:01:23.464 ****
2026-02-04 00:47:16.483306 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e3daecb5-9fd0-5834-b191-078d341d10dc', 'data_vg': 'ceph-e3daecb5-9fd0-5834-b191-078d341d10dc'})
2026-02-04 00:47:16.483310 | orchestrator | skipping: [testbed-node-5] => (item={'data':
'osd-block-607d890d-3e41-57a1-9874-83b389fa50fb', 'data_vg': 'ceph-607d890d-3e41-57a1-9874-83b389fa50fb'})  2026-02-04 00:47:16.483314 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:47:16.483317 | orchestrator | 2026-02-04 00:47:16.483321 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-02-04 00:47:16.483325 | orchestrator | Wednesday 04 February 2026 00:47:15 +0000 (0:00:00.185) 0:01:23.650 **** 2026-02-04 00:47:16.483329 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e3daecb5-9fd0-5834-b191-078d341d10dc', 'data_vg': 'ceph-e3daecb5-9fd0-5834-b191-078d341d10dc'})  2026-02-04 00:47:16.483333 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-607d890d-3e41-57a1-9874-83b389fa50fb', 'data_vg': 'ceph-607d890d-3e41-57a1-9874-83b389fa50fb'})  2026-02-04 00:47:16.483336 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:47:16.483340 | orchestrator | 2026-02-04 00:47:16.483344 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-02-04 00:47:16.483348 | orchestrator | Wednesday 04 February 2026 00:47:16 +0000 (0:00:00.191) 0:01:23.841 **** 2026-02-04 00:47:16.483352 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e3daecb5-9fd0-5834-b191-078d341d10dc', 'data_vg': 'ceph-e3daecb5-9fd0-5834-b191-078d341d10dc'})  2026-02-04 00:47:16.483358 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-607d890d-3e41-57a1-9874-83b389fa50fb', 'data_vg': 'ceph-607d890d-3e41-57a1-9874-83b389fa50fb'})  2026-02-04 00:47:16.483362 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:47:16.483366 | orchestrator | 2026-02-04 00:47:16.483370 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-02-04 00:47:16.483374 | orchestrator | Wednesday 04 February 2026 00:47:16 +0000 (0:00:00.205) 0:01:24.047 **** 2026-02-04 00:47:16.483378 | 
orchestrator | ok: [testbed-node-5] => { 2026-02-04 00:47:16.483382 | orchestrator |  "lvm_report": { 2026-02-04 00:47:16.483386 | orchestrator |  "lv": [ 2026-02-04 00:47:16.483390 | orchestrator |  { 2026-02-04 00:47:16.483394 | orchestrator |  "lv_name": "osd-block-607d890d-3e41-57a1-9874-83b389fa50fb", 2026-02-04 00:47:16.483398 | orchestrator |  "vg_name": "ceph-607d890d-3e41-57a1-9874-83b389fa50fb" 2026-02-04 00:47:16.483402 | orchestrator |  }, 2026-02-04 00:47:16.483406 | orchestrator |  { 2026-02-04 00:47:16.483410 | orchestrator |  "lv_name": "osd-block-e3daecb5-9fd0-5834-b191-078d341d10dc", 2026-02-04 00:47:16.483414 | orchestrator |  "vg_name": "ceph-e3daecb5-9fd0-5834-b191-078d341d10dc" 2026-02-04 00:47:16.483417 | orchestrator |  } 2026-02-04 00:47:16.483421 | orchestrator |  ], 2026-02-04 00:47:16.483425 | orchestrator |  "pv": [ 2026-02-04 00:47:16.483432 | orchestrator |  { 2026-02-04 00:47:16.483436 | orchestrator |  "pv_name": "/dev/sdb", 2026-02-04 00:47:16.483440 | orchestrator |  "vg_name": "ceph-e3daecb5-9fd0-5834-b191-078d341d10dc" 2026-02-04 00:47:16.483444 | orchestrator |  }, 2026-02-04 00:47:16.483448 | orchestrator |  { 2026-02-04 00:47:16.483451 | orchestrator |  "pv_name": "/dev/sdc", 2026-02-04 00:47:16.483455 | orchestrator |  "vg_name": "ceph-607d890d-3e41-57a1-9874-83b389fa50fb" 2026-02-04 00:47:16.483459 | orchestrator |  } 2026-02-04 00:47:16.483463 | orchestrator |  ] 2026-02-04 00:47:16.483467 | orchestrator |  } 2026-02-04 00:47:16.483471 | orchestrator | } 2026-02-04 00:47:16.483475 | orchestrator | 2026-02-04 00:47:16.483479 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 00:47:16.483483 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-02-04 00:47:16.483487 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-02-04 00:47:16.483490 | 
orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-02-04 00:47:16.483494 | orchestrator | 2026-02-04 00:47:16.483498 | orchestrator | 2026-02-04 00:47:16.483502 | orchestrator | 2026-02-04 00:47:16.483506 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-04 00:47:16.483509 | orchestrator | Wednesday 04 February 2026 00:47:16 +0000 (0:00:00.170) 0:01:24.218 **** 2026-02-04 00:47:16.483513 | orchestrator | =============================================================================== 2026-02-04 00:47:16.483538 | orchestrator | Create block VGs -------------------------------------------------------- 5.72s 2026-02-04 00:47:16.483543 | orchestrator | Create block LVs -------------------------------------------------------- 4.18s 2026-02-04 00:47:16.483548 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 2.03s 2026-02-04 00:47:16.483552 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.98s 2026-02-04 00:47:16.483556 | orchestrator | Add known partitions to the list of available block devices ------------- 1.83s 2026-02-04 00:47:16.483561 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.74s 2026-02-04 00:47:16.483566 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.73s 2026-02-04 00:47:16.483570 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.71s 2026-02-04 00:47:16.483578 | orchestrator | Add known links to the list of available block devices ------------------ 1.56s 2026-02-04 00:47:17.215930 | orchestrator | Add known partitions to the list of available block devices ------------- 1.27s 2026-02-04 00:47:17.216017 | orchestrator | Print LVM report data --------------------------------------------------- 1.22s 2026-02-04 00:47:17.216027 | 
orchestrator | Get extra vars for Ceph configuration ----------------------------------- 1.01s 2026-02-04 00:47:17.216034 | orchestrator | Add known links to the list of available block devices ------------------ 0.99s 2026-02-04 00:47:17.216041 | orchestrator | Add known links to the list of available block devices ------------------ 0.97s 2026-02-04 00:47:17.216048 | orchestrator | Add known partitions to the list of available block devices ------------- 0.94s 2026-02-04 00:47:17.216054 | orchestrator | Print number of OSDs wanted per DB+WAL VG ------------------------------- 0.94s 2026-02-04 00:47:17.216062 | orchestrator | Calculate size needed for WAL LVs on ceph_db_wal_devices ---------------- 0.93s 2026-02-04 00:47:17.216069 | orchestrator | Add known partitions to the list of available block devices ------------- 0.91s 2026-02-04 00:47:17.216076 | orchestrator | Get initial list of available block devices ----------------------------- 0.90s 2026-02-04 00:47:17.216082 | orchestrator | Print 'Create WAL LVs for ceph_db_wal_devices' -------------------------- 0.90s 2026-02-04 00:47:30.197718 | orchestrator | 2026-02-04 00:47:30 | INFO  | Prepare task for execution of facts. 2026-02-04 00:47:30.290373 | orchestrator | 2026-02-04 00:47:30 | INFO  | Task fdda32a1-79cd-4fa6-8973-1d5f7dd2daca (facts) was prepared for execution. 2026-02-04 00:47:30.290471 | orchestrator | 2026-02-04 00:47:30 | INFO  | It takes a moment until task fdda32a1-79cd-4fa6-8973-1d5f7dd2daca (facts) has been started and output is visible here. 
2026-02-04 00:47:45.756379 | orchestrator | 2026-02-04 00:47:45.756501 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-02-04 00:47:45.756552 | orchestrator | 2026-02-04 00:47:45.756562 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-02-04 00:47:45.756571 | orchestrator | Wednesday 04 February 2026 00:47:36 +0000 (0:00:00.332) 0:00:00.332 **** 2026-02-04 00:47:45.756579 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:47:45.756590 | orchestrator | ok: [testbed-manager] 2026-02-04 00:47:45.756598 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:47:45.756607 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:47:45.756616 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:47:45.756624 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:47:45.756632 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:47:45.756641 | orchestrator | 2026-02-04 00:47:45.756649 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-02-04 00:47:45.756657 | orchestrator | Wednesday 04 February 2026 00:47:37 +0000 (0:00:01.352) 0:00:01.685 **** 2026-02-04 00:47:45.756667 | orchestrator | skipping: [testbed-manager] 2026-02-04 00:47:45.756682 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:47:45.756696 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:47:45.756710 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:47:45.756724 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:47:45.756739 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:47:45.756751 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:47:45.756760 | orchestrator | 2026-02-04 00:47:45.756768 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-02-04 00:47:45.756776 | orchestrator | 2026-02-04 00:47:45.756785 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-02-04 00:47:45.756793 | orchestrator | Wednesday 04 February 2026 00:47:38 +0000 (0:00:01.524) 0:00:03.210 **** 2026-02-04 00:47:45.756801 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:47:45.756809 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:47:45.756817 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:47:45.756825 | orchestrator | ok: [testbed-manager] 2026-02-04 00:47:45.756833 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:47:45.756842 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:47:45.756851 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:47:45.756864 | orchestrator | 2026-02-04 00:47:45.756877 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-02-04 00:47:45.756890 | orchestrator | 2026-02-04 00:47:45.756904 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-02-04 00:47:45.756916 | orchestrator | Wednesday 04 February 2026 00:47:44 +0000 (0:00:05.439) 0:00:08.650 **** 2026-02-04 00:47:45.756927 | orchestrator | skipping: [testbed-manager] 2026-02-04 00:47:45.756938 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:47:45.756950 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:47:45.756963 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:47:45.756978 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:47:45.756992 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:47:45.757006 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:47:45.757021 | orchestrator | 2026-02-04 00:47:45.757036 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 00:47:45.757052 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-04 00:47:45.757067 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-02-04 00:47:45.757108 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-04 00:47:45.757124 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-04 00:47:45.757138 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-04 00:47:45.757152 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-04 00:47:45.757166 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-04 00:47:45.757176 | orchestrator | 2026-02-04 00:47:45.757184 | orchestrator | 2026-02-04 00:47:45.757197 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-04 00:47:45.757211 | orchestrator | Wednesday 04 February 2026 00:47:45 +0000 (0:00:00.741) 0:00:09.391 **** 2026-02-04 00:47:45.757225 | orchestrator | =============================================================================== 2026-02-04 00:47:45.757238 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.44s 2026-02-04 00:47:45.757253 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.52s 2026-02-04 00:47:45.757266 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.35s 2026-02-04 00:47:45.757279 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.74s 2026-02-04 00:47:58.938945 | orchestrator | 2026-02-04 00:47:58 | INFO  | Prepare task for execution of frr. 2026-02-04 00:47:59.028795 | orchestrator | 2026-02-04 00:47:59 | INFO  | Task 87b244d7-46df-4b01-b0b9-975562311db3 (frr) was prepared for execution. 
2026-02-04 00:47:59.028863 | orchestrator | 2026-02-04 00:47:59 | INFO  | It takes a moment until task 87b244d7-46df-4b01-b0b9-975562311db3 (frr) has been started and output is visible here. 2026-02-04 00:48:31.399727 | orchestrator | 2026-02-04 00:48:31.399797 | orchestrator | PLAY [Apply role frr] ********************************************************** 2026-02-04 00:48:31.399810 | orchestrator | 2026-02-04 00:48:31.399818 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ******** 2026-02-04 00:48:31.399826 | orchestrator | Wednesday 04 February 2026 00:48:03 +0000 (0:00:00.273) 0:00:00.273 **** 2026-02-04 00:48:31.399834 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager 2026-02-04 00:48:31.399842 | orchestrator | 2026-02-04 00:48:31.399849 | orchestrator | TASK [osism.services.frr : Pin frr package version] **************************** 2026-02-04 00:48:31.399857 | orchestrator | Wednesday 04 February 2026 00:48:04 +0000 (0:00:00.277) 0:00:00.551 **** 2026-02-04 00:48:31.399865 | orchestrator | changed: [testbed-manager] 2026-02-04 00:48:31.399875 | orchestrator | 2026-02-04 00:48:31.399882 | orchestrator | TASK [osism.services.frr : Install frr package] ******************************** 2026-02-04 00:48:31.399890 | orchestrator | Wednesday 04 February 2026 00:48:05 +0000 (0:00:01.399) 0:00:01.951 **** 2026-02-04 00:48:31.399898 | orchestrator | changed: [testbed-manager] 2026-02-04 00:48:31.399905 | orchestrator | 2026-02-04 00:48:31.399914 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] ********************* 2026-02-04 00:48:31.399922 | orchestrator | Wednesday 04 February 2026 00:48:18 +0000 (0:00:13.037) 0:00:14.988 **** 2026-02-04 00:48:31.399930 | orchestrator | ok: [testbed-manager] 2026-02-04 00:48:31.399938 | orchestrator | 2026-02-04 00:48:31.399946 | orchestrator | TASK 
[osism.services.frr : Copy file: /etc/frr/daemons] ************************ 2026-02-04 00:48:31.399954 | orchestrator | Wednesday 04 February 2026 00:48:19 +0000 (0:00:01.216) 0:00:16.205 **** 2026-02-04 00:48:31.399962 | orchestrator | changed: [testbed-manager] 2026-02-04 00:48:31.399983 | orchestrator | 2026-02-04 00:48:31.399991 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ****************************** 2026-02-04 00:48:31.399999 | orchestrator | Wednesday 04 February 2026 00:48:20 +0000 (0:00:01.108) 0:00:17.313 **** 2026-02-04 00:48:31.400007 | orchestrator | ok: [testbed-manager] 2026-02-04 00:48:31.400014 | orchestrator | 2026-02-04 00:48:31.400022 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] *** 2026-02-04 00:48:31.400030 | orchestrator | Wednesday 04 February 2026 00:48:22 +0000 (0:00:01.326) 0:00:18.639 **** 2026-02-04 00:48:31.400038 | orchestrator | skipping: [testbed-manager] 2026-02-04 00:48:31.400046 | orchestrator | 2026-02-04 00:48:31.400054 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] *** 2026-02-04 00:48:31.400062 | orchestrator | Wednesday 04 February 2026 00:48:22 +0000 (0:00:00.173) 0:00:18.813 **** 2026-02-04 00:48:31.400070 | orchestrator | skipping: [testbed-manager] 2026-02-04 00:48:31.400077 | orchestrator | 2026-02-04 00:48:31.400085 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ****** 2026-02-04 00:48:31.400092 | orchestrator | Wednesday 04 February 2026 00:48:22 +0000 (0:00:00.169) 0:00:18.982 **** 2026-02-04 00:48:31.400100 | orchestrator | changed: [testbed-manager] 2026-02-04 00:48:31.400107 | orchestrator | 2026-02-04 00:48:31.400115 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ****************************** 2026-02-04 00:48:31.400123 | orchestrator | Wednesday 04 February 2026 00:48:23 +0000 (0:00:01.088) 0:00:20.070 **** 2026-02-04 
00:48:31.400131 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1}) 2026-02-04 00:48:31.400139 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0}) 2026-02-04 00:48:31.400148 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0}) 2026-02-04 00:48:31.400155 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1}) 2026-02-04 00:48:31.400163 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1}) 2026-02-04 00:48:31.400170 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2}) 2026-02-04 00:48:31.400178 | orchestrator | 2026-02-04 00:48:31.400185 | orchestrator | TASK [osism.services.frr : Manage frr service] ********************************* 2026-02-04 00:48:31.400208 | orchestrator | Wednesday 04 February 2026 00:48:27 +0000 (0:00:03.795) 0:00:23.866 **** 2026-02-04 00:48:31.400217 | orchestrator | ok: [testbed-manager] 2026-02-04 00:48:31.400225 | orchestrator | 2026-02-04 00:48:31.400232 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] ********************* 2026-02-04 00:48:31.400240 | orchestrator | Wednesday 04 February 2026 00:48:29 +0000 (0:00:01.849) 0:00:25.716 **** 2026-02-04 00:48:31.400247 | orchestrator | changed: [testbed-manager] 2026-02-04 00:48:31.400255 | orchestrator | 2026-02-04 00:48:31.400262 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 00:48:31.400270 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-04 00:48:31.400278 | orchestrator | 2026-02-04 00:48:31.400285 | orchestrator | 2026-02-04 00:48:31.400294 | orchestrator | TASKS RECAP 
******************************************************************** 2026-02-04 00:48:31.400302 | orchestrator | Wednesday 04 February 2026 00:48:30 +0000 (0:00:01.593) 0:00:27.310 **** 2026-02-04 00:48:31.400309 | orchestrator | =============================================================================== 2026-02-04 00:48:31.400317 | orchestrator | osism.services.frr : Install frr package ------------------------------- 13.04s 2026-02-04 00:48:31.400325 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 3.80s 2026-02-04 00:48:31.400340 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.85s 2026-02-04 00:48:31.400348 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.59s 2026-02-04 00:48:31.400361 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.40s 2026-02-04 00:48:31.400382 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.33s 2026-02-04 00:48:31.400390 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.22s 2026-02-04 00:48:31.400398 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 1.11s 2026-02-04 00:48:31.400407 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 1.09s 2026-02-04 00:48:31.400414 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.28s 2026-02-04 00:48:31.400422 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.17s 2026-02-04 00:48:31.400428 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 0.17s 2026-02-04 00:48:31.815721 | orchestrator | 2026-02-04 00:48:31.818881 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Wed Feb 4 00:48:31 UTC 2026 2026-02-04 00:48:31.818926 | 
orchestrator | 2026-02-04 00:48:34.071292 | orchestrator | 2026-02-04 00:48:34 | INFO  | Collection nutshell is prepared for execution 2026-02-04 00:48:34.071381 | orchestrator | 2026-02-04 00:48:34 | INFO  | A [0] - dotfiles 2026-02-04 00:48:44.084236 | orchestrator | 2026-02-04 00:48:44 | INFO  | A [0] - homer 2026-02-04 00:48:44.084325 | orchestrator | 2026-02-04 00:48:44 | INFO  | A [0] - netdata 2026-02-04 00:48:44.084334 | orchestrator | 2026-02-04 00:48:44 | INFO  | A [0] - openstackclient 2026-02-04 00:48:44.084339 | orchestrator | 2026-02-04 00:48:44 | INFO  | A [0] - phpmyadmin 2026-02-04 00:48:44.084343 | orchestrator | 2026-02-04 00:48:44 | INFO  | A [0] - common 2026-02-04 00:48:44.089938 | orchestrator | 2026-02-04 00:48:44 | INFO  | A [1] -- loadbalancer 2026-02-04 00:48:44.090002 | orchestrator | 2026-02-04 00:48:44 | INFO  | A [2] --- opensearch 2026-02-04 00:48:44.090007 | orchestrator | 2026-02-04 00:48:44 | INFO  | A [2] --- mariadb-ng 2026-02-04 00:48:44.090482 | orchestrator | 2026-02-04 00:48:44 | INFO  | A [3] ---- horizon 2026-02-04 00:48:44.090660 | orchestrator | 2026-02-04 00:48:44 | INFO  | A [3] ---- keystone 2026-02-04 00:48:44.090835 | orchestrator | 2026-02-04 00:48:44 | INFO  | A [4] ----- neutron 2026-02-04 00:48:44.091228 | orchestrator | 2026-02-04 00:48:44 | INFO  | A [5] ------ wait-for-nova 2026-02-04 00:48:44.091533 | orchestrator | 2026-02-04 00:48:44 | INFO  | A [6] ------- octavia 2026-02-04 00:48:44.093559 | orchestrator | 2026-02-04 00:48:44 | INFO  | A [4] ----- barbican 2026-02-04 00:48:44.093610 | orchestrator | 2026-02-04 00:48:44 | INFO  | A [4] ----- designate 2026-02-04 00:48:44.093619 | orchestrator | 2026-02-04 00:48:44 | INFO  | A [4] ----- ironic 2026-02-04 00:48:44.093867 | orchestrator | 2026-02-04 00:48:44 | INFO  | A [4] ----- placement 2026-02-04 00:48:44.093928 | orchestrator | 2026-02-04 00:48:44 | INFO  | A [4] ----- magnum 2026-02-04 00:48:44.095480 | orchestrator | 2026-02-04 00:48:44 | INFO  | A 
[1] -- openvswitch 2026-02-04 00:48:44.095561 | orchestrator | 2026-02-04 00:48:44 | INFO  | A [2] --- ovn 2026-02-04 00:48:44.095572 | orchestrator | 2026-02-04 00:48:44 | INFO  | A [1] -- memcached 2026-02-04 00:48:44.095580 | orchestrator | 2026-02-04 00:48:44 | INFO  | A [1] -- redis 2026-02-04 00:48:44.095587 | orchestrator | 2026-02-04 00:48:44 | INFO  | A [1] -- rabbitmq-ng 2026-02-04 00:48:44.096031 | orchestrator | 2026-02-04 00:48:44 | INFO  | A [0] - kubernetes 2026-02-04 00:48:44.099308 | orchestrator | 2026-02-04 00:48:44 | INFO  | A [1] -- kubeconfig 2026-02-04 00:48:44.099368 | orchestrator | 2026-02-04 00:48:44 | INFO  | A [1] -- copy-kubeconfig 2026-02-04 00:48:44.099406 | orchestrator | 2026-02-04 00:48:44 | INFO  | A [0] - ceph 2026-02-04 00:48:44.102300 | orchestrator | 2026-02-04 00:48:44 | INFO  | A [1] -- ceph-pools 2026-02-04 00:48:44.102477 | orchestrator | 2026-02-04 00:48:44 | INFO  | A [2] --- copy-ceph-keys 2026-02-04 00:48:44.102490 | orchestrator | 2026-02-04 00:48:44 | INFO  | A [3] ---- cephclient 2026-02-04 00:48:44.102516 | orchestrator | 2026-02-04 00:48:44 | INFO  | A [4] ----- ceph-bootstrap-dashboard 2026-02-04 00:48:44.102523 | orchestrator | 2026-02-04 00:48:44 | INFO  | A [4] ----- wait-for-keystone 2026-02-04 00:48:44.102530 | orchestrator | 2026-02-04 00:48:44 | INFO  | A [5] ------ kolla-ceph-rgw 2026-02-04 00:48:44.102536 | orchestrator | 2026-02-04 00:48:44 | INFO  | A [5] ------ glance 2026-02-04 00:48:44.102551 | orchestrator | 2026-02-04 00:48:44 | INFO  | A [5] ------ cinder 2026-02-04 00:48:44.102558 | orchestrator | 2026-02-04 00:48:44 | INFO  | A [5] ------ nova 2026-02-04 00:48:44.102721 | orchestrator | 2026-02-04 00:48:44 | INFO  | A [4] ----- prometheus 2026-02-04 00:48:44.102807 | orchestrator | 2026-02-04 00:48:44 | INFO  | A [5] ------ grafana 2026-02-04 00:48:44.393340 | orchestrator | 2026-02-04 00:48:44 | INFO  | All tasks of the collection nutshell are prepared for execution 2026-02-04 00:48:44.393415 
| orchestrator | 2026-02-04 00:48:44 | INFO  | Tasks are running in the background
2026-02-04 00:48:48.184786 | orchestrator | 2026-02-04 00:48:48 | INFO  | No task IDs specified, wait for all currently running tasks
2026-02-04 00:48:50.353280 | orchestrator | 2026-02-04 00:48:50 | INFO  | Task d37cb688-62d9-483d-acce-1673d790e376 is in state STARTED
2026-02-04 00:48:50.354281 | orchestrator | 2026-02-04 00:48:50 | INFO  | Task bf6ef7d2-e02d-4506-bb9b-726c35056870 is in state STARTED
2026-02-04 00:48:50.357380 | orchestrator | 2026-02-04 00:48:50 | INFO  | Task b7c9eb13-f152-4a35-86c7-aa99e0e421e1 is in state STARTED
2026-02-04 00:48:50.359679 | orchestrator | 2026-02-04 00:48:50 | INFO  | Task 9562b1ee-2a99-40af-99c4-6732e6ebbba7 is in state STARTED
2026-02-04 00:48:50.359919 | orchestrator | 2026-02-04 00:48:50 | INFO  | Task 5601152d-3e60-4673-9739-7588a0c844bc is in state STARTED
2026-02-04 00:48:50.367602 | orchestrator | 2026-02-04 00:48:50 | INFO  | Task 4b82724b-ecb1-4316-873a-4f7a89ed7b41 is in state STARTED
2026-02-04 00:48:50.367659 | orchestrator | 2026-02-04 00:48:50 | INFO  | Task 3738b9ef-0976-4709-8642-f9c421f1f746 is in state STARTED
2026-02-04 00:48:50.367670 | orchestrator | 2026-02-04 00:48:50 | INFO  | Wait 1 second(s) until the next check
2026-02-04 00:48:53.426308 | orchestrator | 2026-02-04 00:48:53 | INFO  | Task d37cb688-62d9-483d-acce-1673d790e376 is in state STARTED
2026-02-04 00:48:53.426972 | orchestrator | 2026-02-04 00:48:53 | INFO  | Task bf6ef7d2-e02d-4506-bb9b-726c35056870 is in state STARTED
2026-02-04 00:48:53.432792 | orchestrator | 2026-02-04 00:48:53 | INFO  | Task b7c9eb13-f152-4a35-86c7-aa99e0e421e1 is in state STARTED
2026-02-04 00:48:53.436598 | orchestrator | 2026-02-04 00:48:53 | INFO  | Task 9562b1ee-2a99-40af-99c4-6732e6ebbba7 is in state STARTED
2026-02-04 00:48:53.437118 | orchestrator | 2026-02-04 00:48:53 | INFO  | Task 5601152d-3e60-4673-9739-7588a0c844bc is in state STARTED
2026-02-04 00:48:53.438162 | orchestrator | 2026-02-04 00:48:53 | INFO  | Task 4b82724b-ecb1-4316-873a-4f7a89ed7b41 is in state STARTED
2026-02-04 00:48:53.447638 | orchestrator | 2026-02-04 00:48:53 | INFO  | Task 3738b9ef-0976-4709-8642-f9c421f1f746 is in state STARTED
2026-02-04 00:48:53.447704 | orchestrator | 2026-02-04 00:48:53 | INFO  | Wait 1 second(s) until the next check
2026-02-04 00:48:56.593450 | orchestrator | 2026-02-04 00:48:56 | INFO  | Task d37cb688-62d9-483d-acce-1673d790e376 is in state STARTED
2026-02-04 00:48:56.594228 | orchestrator | 2026-02-04 00:48:56 | INFO  | Task bf6ef7d2-e02d-4506-bb9b-726c35056870 is in state STARTED
2026-02-04 00:48:56.597723 | orchestrator | 2026-02-04 00:48:56 | INFO  | Task b7c9eb13-f152-4a35-86c7-aa99e0e421e1 is in state STARTED
2026-02-04 00:48:56.599787 | orchestrator | 2026-02-04 00:48:56 | INFO  | Task 9562b1ee-2a99-40af-99c4-6732e6ebbba7 is in state STARTED
2026-02-04 00:48:56.604299 | orchestrator | 2026-02-04 00:48:56 | INFO  | Task 5601152d-3e60-4673-9739-7588a0c844bc is in state STARTED
2026-02-04 00:48:56.608551 | orchestrator | 2026-02-04 00:48:56 | INFO  | Task 4b82724b-ecb1-4316-873a-4f7a89ed7b41 is in state STARTED
2026-02-04 00:48:56.608877 | orchestrator | 2026-02-04 00:48:56 | INFO  | Task 3738b9ef-0976-4709-8642-f9c421f1f746 is in state STARTED
2026-02-04 00:48:56.609003 | orchestrator | 2026-02-04 00:48:56 | INFO  | Wait 1 second(s) until the next check
2026-02-04 00:48:59.667765 | orchestrator | 2026-02-04 00:48:59 | INFO  | Task d37cb688-62d9-483d-acce-1673d790e376 is in state STARTED
2026-02-04 00:48:59.668766 | orchestrator | 2026-02-04 00:48:59 | INFO  | Task bf6ef7d2-e02d-4506-bb9b-726c35056870 is in state STARTED
2026-02-04 00:48:59.671621 | orchestrator | 2026-02-04 00:48:59 | INFO  | Task b7c9eb13-f152-4a35-86c7-aa99e0e421e1 is in state STARTED
2026-02-04 00:48:59.678239 | orchestrator | 2026-02-04 00:48:59 | INFO  | Task 9562b1ee-2a99-40af-99c4-6732e6ebbba7 is in state STARTED
2026-02-04 00:48:59.684028 | orchestrator | 2026-02-04 00:48:59 | INFO  | Task 5601152d-3e60-4673-9739-7588a0c844bc is in state STARTED
2026-02-04 00:48:59.685590 | orchestrator | 2026-02-04 00:48:59 | INFO  | Task 4b82724b-ecb1-4316-873a-4f7a89ed7b41 is in state STARTED
2026-02-04 00:48:59.687976 | orchestrator | 2026-02-04 00:48:59 | INFO  | Task 3738b9ef-0976-4709-8642-f9c421f1f746 is in state STARTED
2026-02-04 00:48:59.688029 | orchestrator | 2026-02-04 00:48:59 | INFO  | Wait 1 second(s) until the next check
2026-02-04 00:49:02.762334 | orchestrator | 2026-02-04 00:49:02 | INFO  | Task d37cb688-62d9-483d-acce-1673d790e376 is in state STARTED
2026-02-04 00:49:02.762392 | orchestrator | 2026-02-04 00:49:02 | INFO  | Task bf6ef7d2-e02d-4506-bb9b-726c35056870 is in state STARTED
2026-02-04 00:49:02.762413 | orchestrator | 2026-02-04 00:49:02 | INFO  | Task b7c9eb13-f152-4a35-86c7-aa99e0e421e1 is in state STARTED
2026-02-04 00:49:02.764345 | orchestrator | 2026-02-04 00:49:02 | INFO  | Task 9562b1ee-2a99-40af-99c4-6732e6ebbba7 is in state STARTED
2026-02-04 00:49:02.765013 | orchestrator | 2026-02-04 00:49:02 | INFO  | Task 5601152d-3e60-4673-9739-7588a0c844bc is in state STARTED
2026-02-04 00:49:02.770417 | orchestrator | 2026-02-04 00:49:02 | INFO  | Task 4b82724b-ecb1-4316-873a-4f7a89ed7b41 is in state STARTED
2026-02-04 00:49:02.771322 | orchestrator | 2026-02-04 00:49:02 | INFO  | Task 3738b9ef-0976-4709-8642-f9c421f1f746 is in state STARTED
2026-02-04 00:49:02.771395 | orchestrator | 2026-02-04 00:49:02 | INFO  | Wait 1 second(s) until the next check
2026-02-04 00:49:05.886956 | orchestrator | 2026-02-04 00:49:05 | INFO  | Task d37cb688-62d9-483d-acce-1673d790e376 is in state STARTED
2026-02-04 00:49:05.887950 | orchestrator | 2026-02-04 00:49:05 | INFO  | Task bf6ef7d2-e02d-4506-bb9b-726c35056870 is in state STARTED
2026-02-04 00:49:05.906425 | orchestrator | 2026-02-04 00:49:05 | INFO  | Task b7c9eb13-f152-4a35-86c7-aa99e0e421e1 is in state STARTED
2026-02-04 00:49:05.906535 | orchestrator | 2026-02-04 00:49:05 | INFO  | Task 9562b1ee-2a99-40af-99c4-6732e6ebbba7 is in state STARTED
2026-02-04 00:49:05.906578 | orchestrator | 2026-02-04 00:49:05 | INFO  | Task 5601152d-3e60-4673-9739-7588a0c844bc is in state STARTED
2026-02-04 00:49:05.906587 | orchestrator | 2026-02-04 00:49:05 | INFO  | Task 4b82724b-ecb1-4316-873a-4f7a89ed7b41 is in state STARTED
2026-02-04 00:49:05.906594 | orchestrator | 2026-02-04 00:49:05 | INFO  | Task 3738b9ef-0976-4709-8642-f9c421f1f746 is in state STARTED
2026-02-04 00:49:05.906601 | orchestrator | 2026-02-04 00:49:05 | INFO  | Wait 1 second(s) until the next check
2026-02-04 00:49:08.993579 | orchestrator | 2026-02-04 00:49:08 | INFO  | Task d37cb688-62d9-483d-acce-1673d790e376 is in state STARTED
2026-02-04 00:49:08.993629 | orchestrator | 2026-02-04 00:49:08 | INFO  | Task bf6ef7d2-e02d-4506-bb9b-726c35056870 is in state STARTED
2026-02-04 00:49:08.997320 | orchestrator | 2026-02-04 00:49:08 | INFO  | Task b7c9eb13-f152-4a35-86c7-aa99e0e421e1 is in state STARTED
2026-02-04 00:49:09.000868 | orchestrator | 2026-02-04 00:49:09 | INFO  | Task 9562b1ee-2a99-40af-99c4-6732e6ebbba7 is in state STARTED
2026-02-04 00:49:09.002213 | orchestrator | 2026-02-04 00:49:09 | INFO  | Task 5601152d-3e60-4673-9739-7588a0c844bc is in state STARTED
2026-02-04 00:49:09.003444 | orchestrator | 2026-02-04 00:49:09 | INFO  | Task 4b82724b-ecb1-4316-873a-4f7a89ed7b41 is in state STARTED
2026-02-04 00:49:09.022246 | orchestrator | 2026-02-04 00:49:09 | INFO  | Task 3738b9ef-0976-4709-8642-f9c421f1f746 is in state STARTED
2026-02-04 00:49:09.022304 | orchestrator | 2026-02-04 00:49:09 | INFO  | Wait 1 second(s) until the next check
2026-02-04 00:49:12.196030 | orchestrator | 2026-02-04 00:49:12 | INFO  | Task d37cb688-62d9-483d-acce-1673d790e376 is in state STARTED
2026-02-04 00:49:12.198461 | orchestrator | 2026-02-04 00:49:12 | INFO  | Task bf6ef7d2-e02d-4506-bb9b-726c35056870 is in state STARTED
2026-02-04 00:49:12.203192 | orchestrator | 2026-02-04 00:49:12 | INFO  | Task b7c9eb13-f152-4a35-86c7-aa99e0e421e1 is in state STARTED
2026-02-04 00:49:12.206700 | orchestrator | 2026-02-04 00:49:12 | INFO  | Task 9562b1ee-2a99-40af-99c4-6732e6ebbba7 is in state STARTED
2026-02-04 00:49:12.210887 | orchestrator | 2026-02-04 00:49:12 | INFO  | Task 5601152d-3e60-4673-9739-7588a0c844bc is in state STARTED
2026-02-04 00:49:12.212053 | orchestrator | 2026-02-04 00:49:12 | INFO  | Task 4b82724b-ecb1-4316-873a-4f7a89ed7b41 is in state STARTED
2026-02-04 00:49:12.213232 | orchestrator | 2026-02-04 00:49:12 | INFO  | Task 3738b9ef-0976-4709-8642-f9c421f1f746 is in state STARTED
2026-02-04 00:49:12.213275 | orchestrator | 2026-02-04 00:49:12 | INFO  | Wait 1 second(s) until the next check
2026-02-04 00:49:15.389577 | orchestrator | 2026-02-04 00:49:15 | INFO  | Task d37cb688-62d9-483d-acce-1673d790e376 is in state STARTED
2026-02-04 00:49:15.396433 | orchestrator | 2026-02-04 00:49:15 | INFO  | Task bf6ef7d2-e02d-4506-bb9b-726c35056870 is in state STARTED
2026-02-04 00:49:15.402564 | orchestrator | 2026-02-04 00:49:15 | INFO  | Task b7c9eb13-f152-4a35-86c7-aa99e0e421e1 is in state STARTED
2026-02-04 00:49:15.409525 | orchestrator | 2026-02-04 00:49:15 | INFO  | Task 9562b1ee-2a99-40af-99c4-6732e6ebbba7 is in state STARTED
2026-02-04 00:49:15.413805 | orchestrator | 2026-02-04 00:49:15 | INFO  | Task 5601152d-3e60-4673-9739-7588a0c844bc is in state STARTED
2026-02-04 00:49:15.414707 | orchestrator | 2026-02-04 00:49:15 | INFO  | Task 4b82724b-ecb1-4316-873a-4f7a89ed7b41 is in state STARTED
2026-02-04 00:49:15.418993 | orchestrator | 2026-02-04 00:49:15 | INFO  | Task 3738b9ef-0976-4709-8642-f9c421f1f746 is in state STARTED
2026-02-04 00:49:15.419120 | orchestrator | 2026-02-04 00:49:15 | INFO  | Wait 1 second(s) until the next check
2026-02-04 00:49:18.577010 | orchestrator | 2026-02-04 00:49:18 | INFO  | Task d37cb688-62d9-483d-acce-1673d790e376 is in state STARTED
2026-02-04 00:49:18.577082 | orchestrator | 2026-02-04 00:49:18 | INFO  | Task bf6ef7d2-e02d-4506-bb9b-726c35056870 is in state STARTED
2026-02-04 00:49:18.577087 | orchestrator | 2026-02-04 00:49:18 | INFO  | Task b7c9eb13-f152-4a35-86c7-aa99e0e421e1 is in state STARTED
2026-02-04 00:49:18.577092 | orchestrator | 2026-02-04 00:49:18 | INFO  | Task 9562b1ee-2a99-40af-99c4-6732e6ebbba7 is in state STARTED
2026-02-04 00:49:18.577096 | orchestrator | 2026-02-04 00:49:18 | INFO  | Task 5601152d-3e60-4673-9739-7588a0c844bc is in state STARTED
2026-02-04 00:49:18.577100 | orchestrator | 2026-02-04 00:49:18 | INFO  | Task 4b82724b-ecb1-4316-873a-4f7a89ed7b41 is in state STARTED
2026-02-04 00:49:18.577104 | orchestrator | 2026-02-04 00:49:18 | INFO  | Task 3738b9ef-0976-4709-8642-f9c421f1f746 is in state STARTED
2026-02-04 00:49:18.577109 | orchestrator | 2026-02-04 00:49:18 | INFO  | Wait 1 second(s) until the next check
2026-02-04 00:49:22.047910 | orchestrator | 2026-02-04 00:49:21 | INFO  | Task d37cb688-62d9-483d-acce-1673d790e376 is in state STARTED
2026-02-04 00:49:22.048061 | orchestrator | 2026-02-04 00:49:21 | INFO  | Task bf6ef7d2-e02d-4506-bb9b-726c35056870 is in state STARTED
2026-02-04 00:49:22.048081 | orchestrator | 2026-02-04 00:49:21 | INFO  | Task b7c9eb13-f152-4a35-86c7-aa99e0e421e1 is in state STARTED
2026-02-04 00:49:22.048092 | orchestrator | 2026-02-04 00:49:21 | INFO  | Task 9562b1ee-2a99-40af-99c4-6732e6ebbba7 is in state STARTED
2026-02-04 00:49:22.048103 | orchestrator | 2026-02-04 00:49:21 | INFO  | Task 5601152d-3e60-4673-9739-7588a0c844bc is in state STARTED
2026-02-04 00:49:22.048113 | orchestrator | 2026-02-04 00:49:21 | INFO  | Task 4b82724b-ecb1-4316-873a-4f7a89ed7b41 is in state STARTED
2026-02-04 00:49:22.048123 | orchestrator | 2026-02-04 00:49:21 | INFO  | Task 3738b9ef-0976-4709-8642-f9c421f1f746 is in state STARTED
2026-02-04 00:49:22.048134 | orchestrator | 2026-02-04 00:49:21 | INFO  | Wait 1 second(s) until the next check
2026-02-04 00:49:25.019683 | orchestrator | 2026-02-04 00:49:25 | INFO  | Task d37cb688-62d9-483d-acce-1673d790e376 is in state STARTED
2026-02-04 00:49:25.019880 | orchestrator | 2026-02-04 00:49:25 | INFO  | Task bf6ef7d2-e02d-4506-bb9b-726c35056870 is in state STARTED
2026-02-04 00:49:25.022808 | orchestrator | 2026-02-04 00:49:25 | INFO  | Task b7c9eb13-f152-4a35-86c7-aa99e0e421e1 is in state STARTED
2026-02-04 00:49:25.022859 | orchestrator | 2026-02-04 00:49:25 | INFO  | Task 9562b1ee-2a99-40af-99c4-6732e6ebbba7 is in state STARTED
2026-02-04 00:49:25.022873 | orchestrator | 2026-02-04 00:49:25 | INFO  | Task 5601152d-3e60-4673-9739-7588a0c844bc is in state STARTED
2026-02-04 00:49:25.024596 | orchestrator | 2026-02-04 00:49:25 | INFO  | Task 4b82724b-ecb1-4316-873a-4f7a89ed7b41 is in state STARTED
2026-02-04 00:49:25.024631 | orchestrator | 2026-02-04 00:49:25 | INFO  | Task 3738b9ef-0976-4709-8642-f9c421f1f746 is in state STARTED
2026-02-04 00:49:25.024642 | orchestrator | 2026-02-04 00:49:25 | INFO  | Wait 1 second(s) until the next check
2026-02-04 00:49:28.288125 | orchestrator | 2026-02-04 00:49:28 | INFO  | Task d37cb688-62d9-483d-acce-1673d790e376 is in state STARTED
2026-02-04 00:49:28.336759 | orchestrator |
2026-02-04 00:49:28.336852 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] *****************************************
2026-02-04 00:49:28.336866 | orchestrator |
2026-02-04 00:49:28.336877 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] ****
2026-02-04 00:49:28.336888 | orchestrator | Wednesday 04 February 2026 00:49:04 +0000 (0:00:01.148) 0:00:01.148 ****
2026-02-04 00:49:28.336899 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:49:28.336938 | orchestrator | changed: [testbed-node-4]
2026-02-04 00:49:28.336949 | orchestrator | changed: [testbed-node-3]
2026-02-04 00:49:28.336960 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:49:28.336970 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:49:28.336980 | orchestrator | changed: [testbed-manager]
2026-02-04 00:49:28.336989 | orchestrator | changed: [testbed-node-5]
2026-02-04 00:49:28.336999 | orchestrator |
2026-02-04 00:49:28.337010 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ********
2026-02-04 00:49:28.337020 | orchestrator | Wednesday 04 February 2026 00:49:09 +0000 (0:00:04.506) 0:00:05.654 ****
2026-02-04 00:49:28.337030 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2026-02-04 00:49:28.337041 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2026-02-04 00:49:28.337050 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2026-02-04 00:49:28.337060 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2026-02-04 00:49:28.337077 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2026-02-04 00:49:28.337087 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2026-02-04 00:49:28.337097 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2026-02-04 00:49:28.337106 | orchestrator |
2026-02-04 00:49:28.337116 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] ***
2026-02-04 00:49:28.337126 | orchestrator | Wednesday 04 February 2026 00:49:11 +0000 (0:00:02.140) 0:00:07.795 ****
2026-02-04 00:49:28.337140 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-02-04 00:49:10.155772', 'end': '2026-02-04 00:49:10.165637', 'delta': '0:00:00.009865', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-02-04 00:49:28.337158 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-02-04 00:49:10.193853', 'end': '2026-02-04 00:49:10.199257', 'delta': '0:00:00.005404', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-02-04 00:49:28.337170 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-02-04 00:49:10.431218', 'end': '2026-02-04 00:49:10.435629', 'delta': '0:00:00.004411', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-02-04 00:49:28.337211 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-02-04 00:49:10.861489', 'end': '2026-02-04 00:49:10.868027', 'delta': '0:00:00.006538', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-02-04 00:49:28.337232 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-02-04 00:49:10.937362', 'end': '2026-02-04 00:49:10.942607', 'delta': '0:00:00.005245', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-02-04 00:49:28.337243 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-02-04 00:49:11.005437', 'end': '2026-02-04 00:49:11.009966', 'delta': '0:00:00.004529', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-02-04 00:49:28.337577 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-02-04 00:49:10.921546', 'end': '2026-02-04 00:49:10.928024', 'delta': '0:00:00.006478', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-02-04 00:49:28.337596 | orchestrator |
2026-02-04 00:49:28.337608 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] ****
2026-02-04 00:49:28.337620 | orchestrator | Wednesday 04 February 2026 00:49:16 +0000 (0:00:05.093) 0:00:12.889 ****
2026-02-04 00:49:28.337632 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2026-02-04 00:49:28.337644 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2026-02-04 00:49:28.337656 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2026-02-04 00:49:28.337668 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2026-02-04 00:49:28.337679 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2026-02-04 00:49:28.337691 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2026-02-04 00:49:28.337708 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2026-02-04 00:49:28.337718 | orchestrator |
2026-02-04 00:49:28.337728 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ******************
2026-02-04 00:49:28.337742 | orchestrator | Wednesday 04 February 2026 00:49:20 +0000 (0:00:04.096) 0:00:16.985 ****
2026-02-04 00:49:28.337752 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf)
2026-02-04 00:49:28.337762 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf)
2026-02-04 00:49:28.337772 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf)
2026-02-04 00:49:28.337782 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf)
2026-02-04 00:49:28.337791 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf)
2026-02-04 00:49:28.337801 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf)
2026-02-04 00:49:28.337811 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf)
2026-02-04 00:49:28.337820 | orchestrator |
2026-02-04 00:49:28.337830 | orchestrator | PLAY RECAP *********************************************************************
2026-02-04 00:49:28.337850 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-04 00:49:28.337861 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-04 00:49:28.337872 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-04 00:49:28.337882 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-04 00:49:28.337891 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-04 00:49:28.337902 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-04 00:49:28.337911 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-04 00:49:28.337921 | orchestrator |
2026-02-04 00:49:28.337931 | orchestrator |
2026-02-04 00:49:28.337941 | orchestrator | TASKS RECAP ********************************************************************
2026-02-04 00:49:28.337951 | orchestrator | Wednesday 04 February 2026 00:49:23 +0000 (0:00:03.334) 0:00:20.320 ****
2026-02-04 00:49:28.337961 | orchestrator | ===============================================================================
2026-02-04 00:49:28.337971 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 5.09s
2026-02-04 00:49:28.337981 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 4.51s
2026-02-04 00:49:28.337991 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 4.10s
2026-02-04 00:49:28.338001 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 3.33s
2026-02-04 00:49:28.338011 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 2.14s
2026-02-04 00:49:28.338091 | orchestrator | 2026-02-04 00:49:28 | INFO  | Task bf6ef7d2-e02d-4506-bb9b-726c35056870 is in state STARTED
2026-02-04 00:49:28.338102 | orchestrator | 2026-02-04 00:49:28 | INFO  | Task b7c9eb13-f152-4a35-86c7-aa99e0e421e1 is in state SUCCESS
2026-02-04 00:49:28.338112 | orchestrator | 2026-02-04 00:49:28 | INFO  | Task 9562b1ee-2a99-40af-99c4-6732e6ebbba7 is in state STARTED
2026-02-04 00:49:28.341374 | orchestrator | 2026-02-04 00:49:28 | INFO  | Task 5601152d-3e60-4673-9739-7588a0c844bc is in state STARTED
2026-02-04 00:49:28.341425 | orchestrator | 2026-02-04 00:49:28 | INFO  | Task 4b82724b-ecb1-4316-873a-4f7a89ed7b41 is in state STARTED
2026-02-04 00:49:28.344160 | orchestrator | 2026-02-04 00:49:28 | INFO  | Task 431c1300-0116-4e0e-8b3c-d7ea772e44bc is in state STARTED
2026-02-04 00:49:28.366639 | orchestrator | 2026-02-04 00:49:28 | INFO  | Task 3738b9ef-0976-4709-8642-f9c421f1f746 is in state STARTED
2026-02-04 00:49:28.367170 | orchestrator | 2026-02-04 00:49:28 | INFO  | Wait 1 second(s) until the next check
2026-02-04 00:49:31.537805 | orchestrator | 2026-02-04 00:49:31 | INFO  | Task d37cb688-62d9-483d-acce-1673d790e376 is in state STARTED
2026-02-04 00:49:31.540011 | orchestrator | 2026-02-04 00:49:31 | INFO  | Task bf6ef7d2-e02d-4506-bb9b-726c35056870 is in state STARTED
2026-02-04 00:49:31.544426 | orchestrator | 2026-02-04 00:49:31 | INFO  | Task 9562b1ee-2a99-40af-99c4-6732e6ebbba7 is in state STARTED
2026-02-04 00:49:31.545427 | orchestrator | 2026-02-04 00:49:31 | INFO  | Task 5601152d-3e60-4673-9739-7588a0c844bc is in state STARTED
2026-02-04 00:49:31.548970 | orchestrator | 2026-02-04 00:49:31 | INFO  | Task 4b82724b-ecb1-4316-873a-4f7a89ed7b41 is in state STARTED
2026-02-04 00:49:31.551241 | orchestrator | 2026-02-04 00:49:31 | INFO  | Task 431c1300-0116-4e0e-8b3c-d7ea772e44bc is in state STARTED
2026-02-04 00:49:31.555461 | orchestrator | 2026-02-04 00:49:31 | INFO  | Task 3738b9ef-0976-4709-8642-f9c421f1f746 is in state STARTED
2026-02-04 00:49:31.555556 | orchestrator | 2026-02-04 00:49:31 | INFO  | Wait 1 second(s) until the next check
2026-02-04 00:49:34.705383 | orchestrator | 2026-02-04 00:49:34 | INFO  | Task d37cb688-62d9-483d-acce-1673d790e376 is in state STARTED
2026-02-04 00:49:34.705436 | orchestrator | 2026-02-04 00:49:34 | INFO  | Task bf6ef7d2-e02d-4506-bb9b-726c35056870 is in state STARTED
2026-02-04 00:49:34.707310 | orchestrator | 2026-02-04 00:49:34 | INFO  | Task 9562b1ee-2a99-40af-99c4-6732e6ebbba7 is in state STARTED
2026-02-04 00:49:34.711817 | orchestrator | 2026-02-04 00:49:34 | INFO  | Task 5601152d-3e60-4673-9739-7588a0c844bc is in state STARTED
2026-02-04 00:49:34.715148 | orchestrator | 2026-02-04 00:49:34 | INFO  | Task 4b82724b-ecb1-4316-873a-4f7a89ed7b41 is in state STARTED
2026-02-04 00:49:34.721353 | orchestrator | 2026-02-04 00:49:34 | INFO  | Task 431c1300-0116-4e0e-8b3c-d7ea772e44bc is in state STARTED
2026-02-04 00:49:34.726080 | orchestrator | 2026-02-04 00:49:34 | INFO  | Task 3738b9ef-0976-4709-8642-f9c421f1f746 is in state STARTED
2026-02-04 00:49:34.726151 | orchestrator | 2026-02-04 00:49:34 | INFO  | Wait 1 second(s) until the next check
2026-02-04 00:49:37.853389 | orchestrator | 2026-02-04 00:49:37 | INFO  | Task d37cb688-62d9-483d-acce-1673d790e376 is in state STARTED
2026-02-04 00:49:37.856200 | orchestrator | 2026-02-04 00:49:37 | INFO  | Task bf6ef7d2-e02d-4506-bb9b-726c35056870 is in state STARTED
2026-02-04 00:49:37.856274 | orchestrator | 2026-02-04 00:49:37 | INFO  | Task 9562b1ee-2a99-40af-99c4-6732e6ebbba7 is in state STARTED
2026-02-04 00:49:37.857388 | orchestrator | 2026-02-04 00:49:37 | INFO  | Task 5601152d-3e60-4673-9739-7588a0c844bc is in state STARTED
2026-02-04 00:49:37.858657 | orchestrator | 2026-02-04 00:49:37 | INFO  | Task 4b82724b-ecb1-4316-873a-4f7a89ed7b41 is in state STARTED
2026-02-04 00:49:37.859369 | orchestrator | 2026-02-04 00:49:37 | INFO  | Task 431c1300-0116-4e0e-8b3c-d7ea772e44bc is in state STARTED
2026-02-04 00:49:37.860752 | orchestrator | 2026-02-04 00:49:37 | INFO  | Task 3738b9ef-0976-4709-8642-f9c421f1f746 is in state STARTED
2026-02-04 00:49:37.860825 | orchestrator | 2026-02-04 00:49:37 | INFO  | Wait 1 second(s) until the next check
2026-02-04 00:49:41.266680 | orchestrator | 2026-02-04 00:49:40 | INFO  | Task d37cb688-62d9-483d-acce-1673d790e376 is in state STARTED
2026-02-04 00:49:41.266798 | orchestrator | 2026-02-04 00:49:40 | INFO  | Task bf6ef7d2-e02d-4506-bb9b-726c35056870 is in state STARTED
2026-02-04 00:49:41.266851 | orchestrator | 2026-02-04 00:49:40 | INFO  | Task 9562b1ee-2a99-40af-99c4-6732e6ebbba7 is in state STARTED
2026-02-04 00:49:41.266867 | orchestrator | 2026-02-04 00:49:40 | INFO  | Task 5601152d-3e60-4673-9739-7588a0c844bc is in state STARTED
2026-02-04 00:49:41.266880 | orchestrator | 2026-02-04 00:49:40 | INFO  | Task 4b82724b-ecb1-4316-873a-4f7a89ed7b41 is in state STARTED
2026-02-04 00:49:41.266893 | orchestrator | 2026-02-04 00:49:40 | INFO  | Task 431c1300-0116-4e0e-8b3c-d7ea772e44bc is in state STARTED
2026-02-04 00:49:41.266906 | orchestrator | 2026-02-04 00:49:40 | INFO  | Task 3738b9ef-0976-4709-8642-f9c421f1f746 is in state STARTED
2026-02-04 00:49:41.266915 | orchestrator | 2026-02-04 00:49:40 | INFO  | Wait 1 second(s) until the next check
2026-02-04 00:49:44.060655 | orchestrator | 2026-02-04 00:49:44 | INFO  | Task d37cb688-62d9-483d-acce-1673d790e376 is in state STARTED
2026-02-04 00:49:44.060736 | orchestrator | 2026-02-04 00:49:44 | INFO  | Task bf6ef7d2-e02d-4506-bb9b-726c35056870 is in state STARTED
2026-02-04 00:49:44.061891 | orchestrator | 2026-02-04 00:49:44 | INFO  | Task 9562b1ee-2a99-40af-99c4-6732e6ebbba7 is in state STARTED
2026-02-04 00:49:44.063951 | orchestrator | 2026-02-04 00:49:44 | INFO  | Task 5601152d-3e60-4673-9739-7588a0c844bc is in state STARTED
2026-02-04 00:49:44.064601 | orchestrator | 2026-02-04 00:49:44 | INFO  | Task 4b82724b-ecb1-4316-873a-4f7a89ed7b41 is in state STARTED
2026-02-04 00:49:44.086805 | orchestrator | 2026-02-04 00:49:44 | INFO  | Task 431c1300-0116-4e0e-8b3c-d7ea772e44bc is in state STARTED
2026-02-04 00:49:44.086878 | orchestrator | 2026-02-04 00:49:44 | INFO  | Task 3738b9ef-0976-4709-8642-f9c421f1f746 is in state STARTED
2026-02-04 00:49:44.086885 | orchestrator | 2026-02-04 00:49:44 | INFO  | Wait 1 second(s) until the next check
2026-02-04 00:49:47.206893 | orchestrator | 2026-02-04 00:49:47 | INFO  | Task d37cb688-62d9-483d-acce-1673d790e376 is in state STARTED
2026-02-04 00:49:47.206991 | orchestrator | 2026-02-04 00:49:47 | INFO  | Task bf6ef7d2-e02d-4506-bb9b-726c35056870 is in state STARTED
2026-02-04 00:49:47.207572 | orchestrator | 2026-02-04 00:49:47 | INFO  | Task 9562b1ee-2a99-40af-99c4-6732e6ebbba7 is in state STARTED
2026-02-04 00:49:47.208292 | orchestrator | 2026-02-04 00:49:47 | INFO  | Task 5601152d-3e60-4673-9739-7588a0c844bc is in state STARTED
2026-02-04 00:49:47.209215 | orchestrator | 2026-02-04 00:49:47 | INFO  | Task 4b82724b-ecb1-4316-873a-4f7a89ed7b41 is in state STARTED
2026-02-04 00:49:47.210205 | orchestrator | 2026-02-04 00:49:47 | INFO  | Task 431c1300-0116-4e0e-8b3c-d7ea772e44bc is in state STARTED
2026-02-04 00:49:47.210623 | orchestrator | 2026-02-04 00:49:47 | INFO  | Task 3738b9ef-0976-4709-8642-f9c421f1f746 is in state STARTED
2026-02-04 00:49:47.210648 | orchestrator | 2026-02-04 00:49:47 | INFO  | Wait 1 second(s) until the next check
2026-02-04 00:49:50.696382 | orchestrator | 2026-02-04 00:49:50 | INFO  | Task d37cb688-62d9-483d-acce-1673d790e376 is in state STARTED
2026-02-04 00:49:50.696462 | orchestrator | 2026-02-04 00:49:50 | INFO  | Task bf6ef7d2-e02d-4506-bb9b-726c35056870 is in state STARTED
2026-02-04 00:49:50.696472 | orchestrator | 2026-02-04 00:49:50 | INFO  | Task 9562b1ee-2a99-40af-99c4-6732e6ebbba7 is in state STARTED
2026-02-04 00:49:50.696569 | orchestrator | 2026-02-04 00:49:50 | INFO  | Task 5601152d-3e60-4673-9739-7588a0c844bc is in state STARTED
2026-02-04 00:49:50.696579 | orchestrator | 2026-02-04 00:49:50 | INFO  | Task 4b82724b-ecb1-4316-873a-4f7a89ed7b41 is in state STARTED
2026-02-04 00:49:50.696586 | orchestrator | 2026-02-04 00:49:50 | INFO  | Task 431c1300-0116-4e0e-8b3c-d7ea772e44bc is in state STARTED
2026-02-04 00:49:50.696620 | orchestrator | 2026-02-04 00:49:50 | INFO  | Task 3738b9ef-0976-4709-8642-f9c421f1f746 is in state STARTED
2026-02-04 00:49:50.696628 | orchestrator | 2026-02-04 00:49:50 | INFO  | Wait 1 second(s) until the next check
2026-02-04 00:49:53.616141 | orchestrator | 2026-02-04 00:49:53 | INFO  | Task d37cb688-62d9-483d-acce-1673d790e376 is in state STARTED
2026-02-04 00:49:53.616757 | orchestrator | 2026-02-04 00:49:53 | INFO  | Task bf6ef7d2-e02d-4506-bb9b-726c35056870 is in state STARTED
2026-02-04 00:49:53.705533 | orchestrator | 2026-02-04 00:49:53 | INFO  | Task 9562b1ee-2a99-40af-99c4-6732e6ebbba7 is in state STARTED
2026-02-04 00:49:53.705622 | orchestrator | 2026-02-04 00:49:53 | INFO  | Task 5601152d-3e60-4673-9739-7588a0c844bc is in state STARTED
2026-02-04 00:49:53.705636 | orchestrator | 2026-02-04 00:49:53 | INFO  | Task 4b82724b-ecb1-4316-873a-4f7a89ed7b41 is in state STARTED
2026-02-04 00:49:53.705643 | orchestrator | 2026-02-04 00:49:53 | INFO  | Task 431c1300-0116-4e0e-8b3c-d7ea772e44bc is in state STARTED
2026-02-04 00:49:53.705649 | orchestrator | 2026-02-04 00:49:53 | INFO  | Task 3738b9ef-0976-4709-8642-f9c421f1f746 is in state STARTED
2026-02-04 00:49:53.705656 | orchestrator | 2026-02-04 00:49:53 | INFO  | Wait 1 second(s) until the next check
2026-02-04 00:49:56.771836 | orchestrator | 2026-02-04 00:49:56 | INFO  | Task d37cb688-62d9-483d-acce-1673d790e376 is in state STARTED
2026-02-04 00:49:56.771922 | orchestrator | 2026-02-04 00:49:56 | INFO  | Task bf6ef7d2-e02d-4506-bb9b-726c35056870 is in state STARTED
2026-02-04 00:49:56.771931 | orchestrator | 2026-02-04 00:49:56 | INFO  | Task 9562b1ee-2a99-40af-99c4-6732e6ebbba7 is in state STARTED
2026-02-04 00:49:56.771937 | orchestrator | 2026-02-04 00:49:56 | INFO  | Task 5601152d-3e60-4673-9739-7588a0c844bc is in state STARTED
2026-02-04 00:49:56.771942 | orchestrator | 2026-02-04 00:49:56 | INFO  | Task 4b82724b-ecb1-4316-873a-4f7a89ed7b41 is in state STARTED
2026-02-04 00:49:56.771947 | orchestrator | 2026-02-04 00:49:56 | INFO  | Task 431c1300-0116-4e0e-8b3c-d7ea772e44bc is in state STARTED
2026-02-04 00:49:56.771953 | orchestrator | 2026-02-04 00:49:56 | INFO  | Task 3738b9ef-0976-4709-8642-f9c421f1f746 is in state STARTED
2026-02-04 00:49:56.771958 | orchestrator | 2026-02-04 00:49:56 | INFO  | Wait 1 second(s) until the next check
2026-02-04 00:49:59.889340 | orchestrator | 2026-02-04 00:49:59 | INFO  | Task d37cb688-62d9-483d-acce-1673d790e376 is in state STARTED
2026-02-04 00:49:59.889441 | orchestrator | 2026-02-04 00:49:59 | INFO  | Task bf6ef7d2-e02d-4506-bb9b-726c35056870 is in state STARTED
2026-02-04 00:49:59.889453 | orchestrator | 2026-02-04 00:49:59 | INFO  | Task 9562b1ee-2a99-40af-99c4-6732e6ebbba7 is in state STARTED
2026-02-04 00:49:59.889461 | orchestrator | 2026-02-04 00:49:59 | INFO  | Task 5601152d-3e60-4673-9739-7588a0c844bc is in state STARTED
2026-02-04 00:49:59.889468 | orchestrator | 2026-02-04 00:49:59 | INFO  | Task 4b82724b-ecb1-4316-873a-4f7a89ed7b41 is in state STARTED
2026-02-04 00:49:59.889556 | orchestrator | 2026-02-04 00:49:59 | INFO  | Task 431c1300-0116-4e0e-8b3c-d7ea772e44bc is in state STARTED
2026-02-04 00:49:59.889565 | orchestrator | 2026-02-04 00:49:59 | INFO  | Task 3738b9ef-0976-4709-8642-f9c421f1f746 is in state STARTED
2026-02-04 00:49:59.889574 | orchestrator | 2026-02-04 00:49:59 | INFO  | Wait 1 second(s) until the next check
2026-02-04 00:50:02.954391 | orchestrator | 2026-02-04 00:50:02 | INFO  | Task d37cb688-62d9-483d-acce-1673d790e376 is in state STARTED
2026-02-04 00:50:02.954509 | orchestrator | 2026-02-04 00:50:02 | INFO  | Task bf6ef7d2-e02d-4506-bb9b-726c35056870 is in state STARTED
2026-02-04 00:50:02.954552 | orchestrator | 2026-02-04 00:50:02 | INFO  | Task 9562b1ee-2a99-40af-99c4-6732e6ebbba7 is in state STARTED
2026-02-04 00:50:02.954564 | orchestrator | 2026-02-04 00:50:02 | INFO  | Task 5601152d-3e60-4673-9739-7588a0c844bc is in state SUCCESS
2026-02-04 00:50:02.954573 | orchestrator | 2026-02-04 00:50:02 | INFO  | Task 4b82724b-ecb1-4316-873a-4f7a89ed7b41 is in state STARTED
2026-02-04 00:50:02.954583 | orchestrator | 2026-02-04 00:50:02 | INFO  | Task 431c1300-0116-4e0e-8b3c-d7ea772e44bc is in state STARTED
2026-02-04 00:50:02.954593 | orchestrator | 2026-02-04 00:50:02 | INFO  | Task 3738b9ef-0976-4709-8642-f9c421f1f746 is in state STARTED
2026-02-04 00:50:02.954604 | orchestrator | 2026-02-04 00:50:02 | INFO  | Wait 1 second(s) until the next check
2026-02-04 00:50:06.032375 | orchestrator | 2026-02-04 00:50:06 | INFO  | Task d37cb688-62d9-483d-acce-1673d790e376 is in state STARTED
2026-02-04 00:50:06.038788 | orchestrator | 2026-02-04 00:50:06 | INFO  | Task bf6ef7d2-e02d-4506-bb9b-726c35056870 is in state STARTED
2026-02-04 00:50:06.038858 | orchestrator | 2026-02-04 00:50:06 | INFO  | Task 9562b1ee-2a99-40af-99c4-6732e6ebbba7 is in state STARTED
2026-02-04 00:50:06.038866 | orchestrator | 2026-02-04 00:50:06 | INFO  | Task 4b82724b-ecb1-4316-873a-4f7a89ed7b41 is in state STARTED
2026-02-04 00:50:06.038872 | orchestrator | 2026-02-04 00:50:06 | INFO  | Task 431c1300-0116-4e0e-8b3c-d7ea772e44bc is in state STARTED
2026-02-04 00:50:06.039671 | orchestrator | 2026-02-04 00:50:06 | INFO  | Task 3738b9ef-0976-4709-8642-f9c421f1f746 is in state STARTED
2026-02-04 00:50:06.039695 | orchestrator | 2026-02-04 00:50:06 | INFO  | Wait 1 second(s) until the next check
2026-02-04 00:50:09.094932 | orchestrator | 2026-02-04 00:50:09 | INFO  | Task d37cb688-62d9-483d-acce-1673d790e376 is in state STARTED
2026-02-04 00:50:09.113788 | orchestrator | 2026-02-04 00:50:09 | INFO  | Task bf6ef7d2-e02d-4506-bb9b-726c35056870 is in state STARTED
2026-02-04 00:50:09.118093 | orchestrator | 2026-02-04 00:50:09 | INFO  | Task 9562b1ee-2a99-40af-99c4-6732e6ebbba7 is in state STARTED
2026-02-04 00:50:09.124097 | orchestrator | 2026-02-04 00:50:09 | INFO  | Task 4b82724b-ecb1-4316-873a-4f7a89ed7b41 is in state STARTED
2026-02-04 00:50:09.150620 | orchestrator | 2026-02-04 00:50:09 | INFO  | Task 431c1300-0116-4e0e-8b3c-d7ea772e44bc is in state STARTED
2026-02-04 00:50:09.153012 | orchestrator | 2026-02-04 00:50:09 | INFO  | Task 3738b9ef-0976-4709-8642-f9c421f1f746 is in state STARTED
2026-02-04 00:50:09.153082 | orchestrator | 2026-02-04 00:50:09 | INFO  | Wait 1 second(s) until the next check
2026-02-04 00:50:12.304055 | orchestrator | 2026-02-04 00:50:12 | INFO  | Task d37cb688-62d9-483d-acce-1673d790e376 is in state SUCCESS
2026-02-04 00:50:12.309825 | orchestrator | 2026-02-04 00:50:12 | INFO  | Task bf6ef7d2-e02d-4506-bb9b-726c35056870 is in state STARTED
2026-02-04 00:50:12.327436 | orchestrator | 2026-02-04 00:50:12 | INFO  | Task
9562b1ee-2a99-40af-99c4-6732e6ebbba7 is in state STARTED 2026-02-04 00:50:12.334573 | orchestrator | 2026-02-04 00:50:12 | INFO  | Task 4b82724b-ecb1-4316-873a-4f7a89ed7b41 is in state STARTED 2026-02-04 00:50:12.343458 | orchestrator | 2026-02-04 00:50:12 | INFO  | Task 431c1300-0116-4e0e-8b3c-d7ea772e44bc is in state STARTED 2026-02-04 00:50:12.354308 | orchestrator | 2026-02-04 00:50:12 | INFO  | Task 3738b9ef-0976-4709-8642-f9c421f1f746 is in state STARTED 2026-02-04 00:50:12.354363 | orchestrator | 2026-02-04 00:50:12 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:50:15.404344 | orchestrator | 2026-02-04 00:50:15 | INFO  | Task bf6ef7d2-e02d-4506-bb9b-726c35056870 is in state STARTED 2026-02-04 00:50:15.407446 | orchestrator | 2026-02-04 00:50:15 | INFO  | Task 9562b1ee-2a99-40af-99c4-6732e6ebbba7 is in state STARTED 2026-02-04 00:50:15.410997 | orchestrator | 2026-02-04 00:50:15 | INFO  | Task 4b82724b-ecb1-4316-873a-4f7a89ed7b41 is in state STARTED 2026-02-04 00:50:15.413355 | orchestrator | 2026-02-04 00:50:15 | INFO  | Task 431c1300-0116-4e0e-8b3c-d7ea772e44bc is in state STARTED 2026-02-04 00:50:15.414990 | orchestrator | 2026-02-04 00:50:15 | INFO  | Task 3738b9ef-0976-4709-8642-f9c421f1f746 is in state STARTED 2026-02-04 00:50:15.415135 | orchestrator | 2026-02-04 00:50:15 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:50:18.474683 | orchestrator | 2026-02-04 00:50:18 | INFO  | Task bf6ef7d2-e02d-4506-bb9b-726c35056870 is in state STARTED 2026-02-04 00:50:18.474775 | orchestrator | 2026-02-04 00:50:18 | INFO  | Task 9562b1ee-2a99-40af-99c4-6732e6ebbba7 is in state STARTED 2026-02-04 00:50:18.475527 | orchestrator | 2026-02-04 00:50:18 | INFO  | Task 4b82724b-ecb1-4316-873a-4f7a89ed7b41 is in state STARTED 2026-02-04 00:50:18.479504 | orchestrator | 2026-02-04 00:50:18 | INFO  | Task 431c1300-0116-4e0e-8b3c-d7ea772e44bc is in state STARTED 2026-02-04 00:50:18.479589 | orchestrator | 2026-02-04 00:50:18 | INFO  | Task 
3738b9ef-0976-4709-8642-f9c421f1f746 is in state STARTED 2026-02-04 00:50:18.479609 | orchestrator | 2026-02-04 00:50:18 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:50:21.591872 | orchestrator | 2026-02-04 00:50:21 | INFO  | Task bf6ef7d2-e02d-4506-bb9b-726c35056870 is in state STARTED 2026-02-04 00:50:21.592815 | orchestrator | 2026-02-04 00:50:21 | INFO  | Task 9562b1ee-2a99-40af-99c4-6732e6ebbba7 is in state STARTED 2026-02-04 00:50:21.592846 | orchestrator | 2026-02-04 00:50:21 | INFO  | Task 4b82724b-ecb1-4316-873a-4f7a89ed7b41 is in state STARTED 2026-02-04 00:50:21.592854 | orchestrator | 2026-02-04 00:50:21 | INFO  | Task 431c1300-0116-4e0e-8b3c-d7ea772e44bc is in state STARTED 2026-02-04 00:50:21.592863 | orchestrator | 2026-02-04 00:50:21 | INFO  | Task 3738b9ef-0976-4709-8642-f9c421f1f746 is in state STARTED 2026-02-04 00:50:21.592870 | orchestrator | 2026-02-04 00:50:21 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:50:24.592281 | orchestrator | 2026-02-04 00:50:24 | INFO  | Task bf6ef7d2-e02d-4506-bb9b-726c35056870 is in state STARTED 2026-02-04 00:50:24.597159 | orchestrator | 2026-02-04 00:50:24 | INFO  | Task 9562b1ee-2a99-40af-99c4-6732e6ebbba7 is in state STARTED 2026-02-04 00:50:24.600128 | orchestrator | 2026-02-04 00:50:24 | INFO  | Task 4b82724b-ecb1-4316-873a-4f7a89ed7b41 is in state STARTED 2026-02-04 00:50:24.602812 | orchestrator | 2026-02-04 00:50:24 | INFO  | Task 431c1300-0116-4e0e-8b3c-d7ea772e44bc is in state STARTED 2026-02-04 00:50:24.605456 | orchestrator | 2026-02-04 00:50:24 | INFO  | Task 3738b9ef-0976-4709-8642-f9c421f1f746 is in state STARTED 2026-02-04 00:50:24.605542 | orchestrator | 2026-02-04 00:50:24 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:50:27.722605 | orchestrator | 2026-02-04 00:50:27 | INFO  | Task bf6ef7d2-e02d-4506-bb9b-726c35056870 is in state STARTED 2026-02-04 00:50:27.722713 | orchestrator | 2026-02-04 00:50:27 | INFO  | Task 
9562b1ee-2a99-40af-99c4-6732e6ebbba7 is in state STARTED 2026-02-04 00:50:27.722722 | orchestrator | 2026-02-04 00:50:27 | INFO  | Task 4b82724b-ecb1-4316-873a-4f7a89ed7b41 is in state STARTED 2026-02-04 00:50:27.722729 | orchestrator | 2026-02-04 00:50:27 | INFO  | Task 431c1300-0116-4e0e-8b3c-d7ea772e44bc is in state STARTED 2026-02-04 00:50:27.723594 | orchestrator | 2026-02-04 00:50:27 | INFO  | Task 3738b9ef-0976-4709-8642-f9c421f1f746 is in state STARTED 2026-02-04 00:50:27.723679 | orchestrator | 2026-02-04 00:50:27 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:50:30.802124 | orchestrator | 2026-02-04 00:50:30 | INFO  | Task bf6ef7d2-e02d-4506-bb9b-726c35056870 is in state STARTED 2026-02-04 00:50:30.804343 | orchestrator | 2026-02-04 00:50:30 | INFO  | Task 9562b1ee-2a99-40af-99c4-6732e6ebbba7 is in state STARTED 2026-02-04 00:50:30.808339 | orchestrator | 2026-02-04 00:50:30 | INFO  | Task 4b82724b-ecb1-4316-873a-4f7a89ed7b41 is in state STARTED 2026-02-04 00:50:30.808380 | orchestrator | 2026-02-04 00:50:30 | INFO  | Task 431c1300-0116-4e0e-8b3c-d7ea772e44bc is in state STARTED 2026-02-04 00:50:30.809915 | orchestrator | 2026-02-04 00:50:30 | INFO  | Task 3738b9ef-0976-4709-8642-f9c421f1f746 is in state STARTED 2026-02-04 00:50:30.810482 | orchestrator | 2026-02-04 00:50:30 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:50:33.905378 | orchestrator | 2026-02-04 00:50:33 | INFO  | Task bf6ef7d2-e02d-4506-bb9b-726c35056870 is in state STARTED 2026-02-04 00:50:33.910191 | orchestrator | 2026-02-04 00:50:33 | INFO  | Task 9562b1ee-2a99-40af-99c4-6732e6ebbba7 is in state STARTED 2026-02-04 00:50:33.915586 | orchestrator | 2026-02-04 00:50:33 | INFO  | Task 4b82724b-ecb1-4316-873a-4f7a89ed7b41 is in state STARTED 2026-02-04 00:50:33.918132 | orchestrator | 2026-02-04 00:50:33 | INFO  | Task 431c1300-0116-4e0e-8b3c-d7ea772e44bc is in state STARTED 2026-02-04 00:50:33.921326 | orchestrator | 2026-02-04 00:50:33 | INFO  | Task 
3738b9ef-0976-4709-8642-f9c421f1f746 is in state STARTED 2026-02-04 00:50:33.921366 | orchestrator | 2026-02-04 00:50:33 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:50:36.992516 | orchestrator | 2026-02-04 00:50:36 | INFO  | Task bf6ef7d2-e02d-4506-bb9b-726c35056870 is in state STARTED 2026-02-04 00:50:36.992560 | orchestrator | 2026-02-04 00:50:36 | INFO  | Task 9562b1ee-2a99-40af-99c4-6732e6ebbba7 is in state STARTED 2026-02-04 00:50:36.997545 | orchestrator | 2026-02-04 00:50:36 | INFO  | Task 4b82724b-ecb1-4316-873a-4f7a89ed7b41 is in state STARTED 2026-02-04 00:50:36.997597 | orchestrator | 2026-02-04 00:50:36 | INFO  | Task 431c1300-0116-4e0e-8b3c-d7ea772e44bc is in state STARTED 2026-02-04 00:50:36.997604 | orchestrator | 2026-02-04 00:50:36 | INFO  | Task 3738b9ef-0976-4709-8642-f9c421f1f746 is in state STARTED 2026-02-04 00:50:36.997610 | orchestrator | 2026-02-04 00:50:36 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:50:40.073182 | orchestrator | 2026-02-04 00:50:40 | INFO  | Task bf6ef7d2-e02d-4506-bb9b-726c35056870 is in state STARTED 2026-02-04 00:50:40.075083 | orchestrator | 2026-02-04 00:50:40 | INFO  | Task 9562b1ee-2a99-40af-99c4-6732e6ebbba7 is in state STARTED 2026-02-04 00:50:40.077626 | orchestrator | 2026-02-04 00:50:40 | INFO  | Task 4b82724b-ecb1-4316-873a-4f7a89ed7b41 is in state STARTED 2026-02-04 00:50:40.078962 | orchestrator | 2026-02-04 00:50:40 | INFO  | Task 431c1300-0116-4e0e-8b3c-d7ea772e44bc is in state STARTED 2026-02-04 00:50:40.080023 | orchestrator | 2026-02-04 00:50:40 | INFO  | Task 3738b9ef-0976-4709-8642-f9c421f1f746 is in state STARTED 2026-02-04 00:50:40.080844 | orchestrator | 2026-02-04 00:50:40 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:50:43.187680 | orchestrator | 2026-02-04 00:50:43 | INFO  | Task bf6ef7d2-e02d-4506-bb9b-726c35056870 is in state STARTED 2026-02-04 00:50:43.191879 | orchestrator | 2026-02-04 00:50:43 | INFO  | Task 
9562b1ee-2a99-40af-99c4-6732e6ebbba7 is in state STARTED 2026-02-04 00:50:43.192929 | orchestrator | 2026-02-04 00:50:43 | INFO  | Task 4b82724b-ecb1-4316-873a-4f7a89ed7b41 is in state STARTED 2026-02-04 00:50:43.194574 | orchestrator | 2026-02-04 00:50:43 | INFO  | Task 431c1300-0116-4e0e-8b3c-d7ea772e44bc is in state STARTED 2026-02-04 00:50:43.196184 | orchestrator | 2026-02-04 00:50:43 | INFO  | Task 3738b9ef-0976-4709-8642-f9c421f1f746 is in state STARTED 2026-02-04 00:50:43.197516 | orchestrator | 2026-02-04 00:50:43 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:50:46.338920 | orchestrator | 2026-02-04 00:50:46 | INFO  | Task bf6ef7d2-e02d-4506-bb9b-726c35056870 is in state STARTED 2026-02-04 00:50:46.345352 | orchestrator | 2026-02-04 00:50:46 | INFO  | Task 9562b1ee-2a99-40af-99c4-6732e6ebbba7 is in state STARTED 2026-02-04 00:50:46.349316 | orchestrator | 2026-02-04 00:50:46 | INFO  | Task 4b82724b-ecb1-4316-873a-4f7a89ed7b41 is in state STARTED 2026-02-04 00:50:46.354865 | orchestrator | 2026-02-04 00:50:46 | INFO  | Task 431c1300-0116-4e0e-8b3c-d7ea772e44bc is in state STARTED 2026-02-04 00:50:46.358424 | orchestrator | 2026-02-04 00:50:46 | INFO  | Task 3738b9ef-0976-4709-8642-f9c421f1f746 is in state STARTED 2026-02-04 00:50:46.359156 | orchestrator | 2026-02-04 00:50:46 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:50:49.524052 | orchestrator | 2026-02-04 00:50:49 | INFO  | Task bf6ef7d2-e02d-4506-bb9b-726c35056870 is in state STARTED 2026-02-04 00:50:49.586606 | orchestrator | 2026-02-04 00:50:49 | INFO  | Task 9562b1ee-2a99-40af-99c4-6732e6ebbba7 is in state STARTED 2026-02-04 00:50:49.586683 | orchestrator | 2026-02-04 00:50:49 | INFO  | Task 4b82724b-ecb1-4316-873a-4f7a89ed7b41 is in state STARTED 2026-02-04 00:50:49.586689 | orchestrator | 2026-02-04 00:50:49 | INFO  | Task 431c1300-0116-4e0e-8b3c-d7ea772e44bc is in state STARTED 2026-02-04 00:50:49.586694 | orchestrator | 2026-02-04 00:50:49 | INFO  | Task 
3738b9ef-0976-4709-8642-f9c421f1f746 is in state STARTED 2026-02-04 00:50:49.586700 | orchestrator | 2026-02-04 00:50:49 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:50:52.691693 | orchestrator | 2026-02-04 00:50:52 | INFO  | Task bf6ef7d2-e02d-4506-bb9b-726c35056870 is in state STARTED 2026-02-04 00:50:52.691765 | orchestrator | 2026-02-04 00:50:52 | INFO  | Task 9562b1ee-2a99-40af-99c4-6732e6ebbba7 is in state STARTED 2026-02-04 00:50:52.691787 | orchestrator | 2026-02-04 00:50:52 | INFO  | Task 4b82724b-ecb1-4316-873a-4f7a89ed7b41 is in state STARTED 2026-02-04 00:50:52.691792 | orchestrator | 2026-02-04 00:50:52 | INFO  | Task 431c1300-0116-4e0e-8b3c-d7ea772e44bc is in state STARTED 2026-02-04 00:50:52.691796 | orchestrator | 2026-02-04 00:50:52 | INFO  | Task 3738b9ef-0976-4709-8642-f9c421f1f746 is in state STARTED 2026-02-04 00:50:52.691801 | orchestrator | 2026-02-04 00:50:52 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:50:55.807390 | orchestrator | 2026-02-04 00:50:55 | INFO  | Task bf6ef7d2-e02d-4506-bb9b-726c35056870 is in state STARTED 2026-02-04 00:50:55.817173 | orchestrator | 2026-02-04 00:50:55 | INFO  | Task 9562b1ee-2a99-40af-99c4-6732e6ebbba7 is in state STARTED 2026-02-04 00:50:55.824366 | orchestrator | 2026-02-04 00:50:55 | INFO  | Task 4b82724b-ecb1-4316-873a-4f7a89ed7b41 is in state STARTED 2026-02-04 00:50:55.830158 | orchestrator | 2026-02-04 00:50:55 | INFO  | Task 431c1300-0116-4e0e-8b3c-d7ea772e44bc is in state STARTED 2026-02-04 00:50:55.833486 | orchestrator | 2026-02-04 00:50:55 | INFO  | Task 3738b9ef-0976-4709-8642-f9c421f1f746 is in state STARTED 2026-02-04 00:50:55.833566 | orchestrator | 2026-02-04 00:50:55 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:50:58.899556 | orchestrator | 2026-02-04 00:50:58 | INFO  | Task bf6ef7d2-e02d-4506-bb9b-726c35056870 is in state STARTED 2026-02-04 00:50:58.903208 | orchestrator | 2026-02-04 00:50:58 | INFO  | Task 
9562b1ee-2a99-40af-99c4-6732e6ebbba7 is in state STARTED 2026-02-04 00:50:58.907858 | orchestrator | 2026-02-04 00:50:58 | INFO  | Task 4b82724b-ecb1-4316-873a-4f7a89ed7b41 is in state STARTED 2026-02-04 00:50:58.911029 | orchestrator | 2026-02-04 00:50:58 | INFO  | Task 431c1300-0116-4e0e-8b3c-d7ea772e44bc is in state STARTED 2026-02-04 00:50:58.914341 | orchestrator | 2026-02-04 00:50:58 | INFO  | Task 3738b9ef-0976-4709-8642-f9c421f1f746 is in state STARTED 2026-02-04 00:50:58.914513 | orchestrator | 2026-02-04 00:50:58 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:51:02.072001 | orchestrator | 2026-02-04 00:51:02 | INFO  | Task bf6ef7d2-e02d-4506-bb9b-726c35056870 is in state STARTED 2026-02-04 00:51:02.075854 | orchestrator | 2026-02-04 00:51:02 | INFO  | Task 9562b1ee-2a99-40af-99c4-6732e6ebbba7 is in state STARTED 2026-02-04 00:51:02.075970 | orchestrator | 2026-02-04 00:51:02 | INFO  | Task 4b82724b-ecb1-4316-873a-4f7a89ed7b41 is in state STARTED 2026-02-04 00:51:02.079339 | orchestrator | 2026-02-04 00:51:02 | INFO  | Task 431c1300-0116-4e0e-8b3c-d7ea772e44bc is in state STARTED 2026-02-04 00:51:02.085410 | orchestrator | 2026-02-04 00:51:02 | INFO  | Task 3738b9ef-0976-4709-8642-f9c421f1f746 is in state STARTED 2026-02-04 00:51:02.085796 | orchestrator | 2026-02-04 00:51:02 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:51:05.145178 | orchestrator | 2026-02-04 00:51:05 | INFO  | Task bf6ef7d2-e02d-4506-bb9b-726c35056870 is in state STARTED 2026-02-04 00:51:05.148952 | orchestrator | 2026-02-04 00:51:05 | INFO  | Task 9562b1ee-2a99-40af-99c4-6732e6ebbba7 is in state STARTED 2026-02-04 00:51:05.150335 | orchestrator | 2026-02-04 00:51:05 | INFO  | Task 4b82724b-ecb1-4316-873a-4f7a89ed7b41 is in state STARTED 2026-02-04 00:51:05.154553 | orchestrator | 2026-02-04 00:51:05 | INFO  | Task 431c1300-0116-4e0e-8b3c-d7ea772e44bc is in state STARTED 2026-02-04 00:51:05.158850 | orchestrator | 2026-02-04 00:51:05 | INFO  | Task 
3738b9ef-0976-4709-8642-f9c421f1f746 is in state STARTED 2026-02-04 00:51:05.158893 | orchestrator | 2026-02-04 00:51:05 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:51:08.207542 | orchestrator | 2026-02-04 00:51:08 | INFO  | Task bf6ef7d2-e02d-4506-bb9b-726c35056870 is in state STARTED 2026-02-04 00:51:08.208739 | orchestrator | 2026-02-04 00:51:08 | INFO  | Task 9562b1ee-2a99-40af-99c4-6732e6ebbba7 is in state SUCCESS 2026-02-04 00:51:08.209736 | orchestrator | 2026-02-04 00:51:08.209772 | orchestrator | 2026-02-04 00:51:08.209777 | orchestrator | PLAY [Apply role homer] ******************************************************** 2026-02-04 00:51:08.209783 | orchestrator | 2026-02-04 00:51:08.209789 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] *** 2026-02-04 00:51:08.209797 | orchestrator | Wednesday 04 February 2026 00:49:02 +0000 (0:00:01.360) 0:00:01.360 **** 2026-02-04 00:51:08.209804 | orchestrator | ok: [testbed-manager] => { 2026-02-04 00:51:08.209813 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter." 
2026-02-04 00:51:08.209823 | orchestrator | }
2026-02-04 00:51:08.209827 | orchestrator |
2026-02-04 00:51:08.209832 | orchestrator | TASK [osism.services.homer : Create traefik external network] ******************
2026-02-04 00:51:08.209836 | orchestrator | Wednesday 04 February 2026 00:49:03 +0000 (0:00:00.392) 0:00:01.753 ****
2026-02-04 00:51:08.209840 | orchestrator | ok: [testbed-manager]
2026-02-04 00:51:08.209846 | orchestrator |
2026-02-04 00:51:08.209862 | orchestrator | TASK [osism.services.homer : Create required directories] **********************
2026-02-04 00:51:08.209866 | orchestrator | Wednesday 04 February 2026 00:49:06 +0000 (0:00:03.415) 0:00:05.168 ****
2026-02-04 00:51:08.209870 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration)
2026-02-04 00:51:08.209874 | orchestrator | ok: [testbed-manager] => (item=/opt/homer)
2026-02-04 00:51:08.209895 | orchestrator |
2026-02-04 00:51:08.209900 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] ***************
2026-02-04 00:51:08.209904 | orchestrator | Wednesday 04 February 2026 00:49:08 +0000 (0:00:02.066) 0:00:07.234 ****
2026-02-04 00:51:08.209908 | orchestrator | changed: [testbed-manager]
2026-02-04 00:51:08.209911 | orchestrator |
2026-02-04 00:51:08.209915 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] *********************
2026-02-04 00:51:08.209919 | orchestrator | Wednesday 04 February 2026 00:49:15 +0000 (0:00:06.894) 0:00:14.129 ****
2026-02-04 00:51:08.209923 | orchestrator | changed: [testbed-manager]
2026-02-04 00:51:08.209927 | orchestrator |
2026-02-04 00:51:08.209931 | orchestrator | TASK [osism.services.homer : Manage homer service] *****************************
2026-02-04 00:51:08.209934 | orchestrator | Wednesday 04 February 2026 00:49:17 +0000 (0:00:02.000) 0:00:16.129 ****
2026-02-04 00:51:08.209938 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left).
2026-02-04 00:51:08.209942 | orchestrator | ok: [testbed-manager]
2026-02-04 00:51:08.209946 | orchestrator |
2026-02-04 00:51:08.209950 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] *****************
2026-02-04 00:51:08.209954 | orchestrator | Wednesday 04 February 2026 00:49:51 +0000 (0:00:33.986) 0:00:50.115 ****
2026-02-04 00:51:08.209957 | orchestrator | changed: [testbed-manager]
2026-02-04 00:51:08.209961 | orchestrator |
2026-02-04 00:51:08.209965 | orchestrator | PLAY RECAP *********************************************************************
2026-02-04 00:51:08.209969 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-04 00:51:08.209974 | orchestrator |
2026-02-04 00:51:08.209978 | orchestrator |
2026-02-04 00:51:08.209982 | orchestrator | TASKS RECAP ********************************************************************
2026-02-04 00:51:08.209986 | orchestrator | Wednesday 04 February 2026 00:49:59 +0000 (0:00:07.975) 0:00:58.091 ****
2026-02-04 00:51:08.209990 | orchestrator | ===============================================================================
2026-02-04 00:51:08.209993 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 33.99s
2026-02-04 00:51:08.209997 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 7.98s
2026-02-04 00:51:08.210001 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 6.89s
2026-02-04 00:51:08.210005 | orchestrator | osism.services.homer : Create traefik external network ------------------ 3.42s
2026-02-04 00:51:08.210009 | orchestrator | osism.services.homer : Create required directories ---------------------- 2.07s
2026-02-04 00:51:08.210055 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 2.00s
2026-02-04 00:51:08.210067 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.39s
2026-02-04 00:51:08.210073 | orchestrator |
2026-02-04 00:51:08.210079 | orchestrator |
2026-02-04 00:51:08.210085 | orchestrator | PLAY [Apply role openstackclient] **********************************************
2026-02-04 00:51:08.210092 | orchestrator |
2026-02-04 00:51:08.210098 | orchestrator | TASK [osism.services.openstackclient : Include tasks] **************************
2026-02-04 00:51:08.210104 | orchestrator | Wednesday 04 February 2026 00:49:03 +0000 (0:00:01.316) 0:00:01.316 ****
2026-02-04 00:51:08.210111 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager
2026-02-04 00:51:08.210118 | orchestrator |
2026-02-04 00:51:08.210125 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************
2026-02-04 00:51:08.210131 | orchestrator | Wednesday 04 February 2026 00:49:04 +0000 (0:00:00.799) 0:00:02.116 ****
2026-02-04 00:51:08.210137 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack)
2026-02-04 00:51:08.210144 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data)
2026-02-04 00:51:08.210149 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient)
2026-02-04 00:51:08.210159 | orchestrator |
2026-02-04 00:51:08.210163 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
2026-02-04 00:51:08.210167 | orchestrator | Wednesday 04 February 2026 00:49:07 +0000 (0:00:03.426) 0:00:05.542 ****
2026-02-04 00:51:08.210171 | orchestrator | changed: [testbed-manager]
2026-02-04 00:51:08.210175 | orchestrator |
2026-02-04 00:51:08.210179 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] *********
2026-02-04 00:51:08.210182 | orchestrator | Wednesday 04 February 2026 00:49:13 +0000 (0:00:05.691) 0:00:11.233 ****
2026-02-04 00:51:08.210197 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left).
2026-02-04 00:51:08.210202 | orchestrator | ok: [testbed-manager]
2026-02-04 00:51:08.210206 | orchestrator |
2026-02-04 00:51:08.210210 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] **********
2026-02-04 00:51:08.210214 | orchestrator | Wednesday 04 February 2026 00:49:55 +0000 (0:00:42.628) 0:00:53.862 ****
2026-02-04 00:51:08.210218 | orchestrator | changed: [testbed-manager]
2026-02-04 00:51:08.210221 | orchestrator |
2026-02-04 00:51:08.210225 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] **********
2026-02-04 00:51:08.210229 | orchestrator | Wednesday 04 February 2026 00:50:01 +0000 (0:00:05.384) 0:00:59.246 ****
2026-02-04 00:51:08.210233 | orchestrator | ok: [testbed-manager]
2026-02-04 00:51:08.210237 | orchestrator |
2026-02-04 00:51:08.210241 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] ***
2026-02-04 00:51:08.210245 | orchestrator | Wednesday 04 February 2026 00:50:02 +0000 (0:00:01.784) 0:01:01.031 ****
2026-02-04 00:51:08.210252 | orchestrator | changed: [testbed-manager]
2026-02-04 00:51:08.210256 | orchestrator |
2026-02-04 00:51:08.210260 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] ***
2026-02-04 00:51:08.210264 | orchestrator | Wednesday 04 February 2026 00:50:05 +0000 (0:00:02.673) 0:01:03.704 ****
2026-02-04 00:51:08.210268 | orchestrator | changed: [testbed-manager]
2026-02-04 00:51:08.210272 | orchestrator |
2026-02-04 00:51:08.210276 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] ***
2026-02-04 00:51:08.210280 | orchestrator | Wednesday 04 February 2026 00:50:06 +0000 (0:00:01.047) 0:01:04.752 ****
2026-02-04 00:51:08.210284 | orchestrator | changed: [testbed-manager]
2026-02-04 00:51:08.210288 | orchestrator |
2026-02-04 00:51:08.210292 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
2026-02-04 00:51:08.210295 | orchestrator | Wednesday 04 February 2026 00:50:07 +0000 (0:00:00.922) 0:01:05.675 ****
2026-02-04 00:51:08.210299 | orchestrator | ok: [testbed-manager]
2026-02-04 00:51:08.210303 | orchestrator |
2026-02-04 00:51:08.210307 | orchestrator | PLAY RECAP *********************************************************************
2026-02-04 00:51:08.210311 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-04 00:51:08.210315 | orchestrator |
2026-02-04 00:51:08.210319 | orchestrator |
2026-02-04 00:51:08.210322 | orchestrator | TASKS RECAP ********************************************************************
2026-02-04 00:51:08.210326 | orchestrator | Wednesday 04 February 2026 00:50:08 +0000 (0:00:00.488) 0:01:06.163 ****
2026-02-04 00:51:08.210330 | orchestrator | ===============================================================================
2026-02-04 00:51:08.210334 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 42.63s
2026-02-04 00:51:08.210338 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 5.69s
2026-02-04 00:51:08.210342 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 5.38s
2026-02-04 00:51:08.210345 | orchestrator | osism.services.openstackclient : Create required directories ------------ 3.43s
2026-02-04 00:51:08.210349 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 2.67s
2026-02-04 00:51:08.210353 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 1.78s
2026-02-04 00:51:08.210357 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 1.05s
2026-02-04 00:51:08.210364 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.92s 2026-02-04 00:51:08.210368 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.80s 2026-02-04 00:51:08.210371 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.49s 2026-02-04 00:51:08.210375 | orchestrator | 2026-02-04 00:51:08.210660 | orchestrator | 2026-02-04 00:51:08.210672 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-04 00:51:08.210677 | orchestrator | 2026-02-04 00:51:08.210681 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-04 00:51:08.210685 | orchestrator | Wednesday 04 February 2026 00:49:04 +0000 (0:00:01.409) 0:00:01.409 **** 2026-02-04 00:51:08.210689 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True) 2026-02-04 00:51:08.210693 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True) 2026-02-04 00:51:08.210697 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True) 2026-02-04 00:51:08.210701 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True) 2026-02-04 00:51:08.210705 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True) 2026-02-04 00:51:08.210708 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True) 2026-02-04 00:51:08.210712 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True) 2026-02-04 00:51:08.210716 | orchestrator | 2026-02-04 00:51:08.210720 | orchestrator | PLAY [Apply role netdata] ****************************************************** 2026-02-04 00:51:08.210724 | orchestrator | 2026-02-04 00:51:08.210728 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] **** 2026-02-04 00:51:08.210732 | orchestrator | Wednesday 04 February 2026 00:49:07 +0000 
(0:00:03.452) 0:00:04.861 **** 2026-02-04 00:51:08.210745 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-4, testbed-node-3, testbed-node-5 2026-02-04 00:51:08.210753 | orchestrator | 2026-02-04 00:51:08.210757 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] *** 2026-02-04 00:51:08.210761 | orchestrator | Wednesday 04 February 2026 00:49:11 +0000 (0:00:03.768) 0:00:08.629 **** 2026-02-04 00:51:08.210765 | orchestrator | ok: [testbed-manager] 2026-02-04 00:51:08.210769 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:51:08.210773 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:51:08.210777 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:51:08.210781 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:51:08.210785 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:51:08.210789 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:51:08.210793 | orchestrator | 2026-02-04 00:51:08.210796 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************ 2026-02-04 00:51:08.210800 | orchestrator | Wednesday 04 February 2026 00:49:17 +0000 (0:00:06.105) 0:00:14.735 **** 2026-02-04 00:51:08.210804 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:51:08.210808 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:51:08.210812 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:51:08.210816 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:51:08.210820 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:51:08.210824 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:51:08.210827 | orchestrator | ok: [testbed-manager] 2026-02-04 00:51:08.210831 | orchestrator | 2026-02-04 00:51:08.210835 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] ************************* 2026-02-04 00:51:08.210839 | 
orchestrator | Wednesday 04 February 2026 00:49:24 +0000 (0:00:07.095) 0:00:21.830 ****
2026-02-04 00:51:08.210846 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:51:08.210850 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:51:08.210854 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:51:08.210858 | orchestrator | changed: [testbed-node-3]
2026-02-04 00:51:08.210862 | orchestrator | changed: [testbed-node-4]
2026-02-04 00:51:08.210866 | orchestrator | changed: [testbed-node-5]
2026-02-04 00:51:08.210874 | orchestrator | changed: [testbed-manager]
2026-02-04 00:51:08.210878 | orchestrator |
2026-02-04 00:51:08.210882 | orchestrator | TASK [osism.services.netdata : Add repository] *********************************
2026-02-04 00:51:08.210886 | orchestrator | Wednesday 04 February 2026 00:49:28 +0000 (0:00:03.844) 0:00:25.675 ****
2026-02-04 00:51:08.210889 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:51:08.210893 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:51:08.210897 | orchestrator | changed: [testbed-node-3]
2026-02-04 00:51:08.210901 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:51:08.210905 | orchestrator | changed: [testbed-node-4]
2026-02-04 00:51:08.210908 | orchestrator | changed: [testbed-node-5]
2026-02-04 00:51:08.210912 | orchestrator | changed: [testbed-manager]
2026-02-04 00:51:08.210916 | orchestrator |
2026-02-04 00:51:08.210920 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************
2026-02-04 00:51:08.210924 | orchestrator | Wednesday 04 February 2026 00:49:45 +0000 (0:00:16.709) 0:00:42.384 ****
2026-02-04 00:51:08.210928 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:51:08.210931 | orchestrator | changed: [testbed-node-4]
2026-02-04 00:51:08.210935 | orchestrator | changed: [testbed-node-3]
2026-02-04 00:51:08.210939 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:51:08.210943 | orchestrator | changed: [testbed-node-5]
2026-02-04 00:51:08.210947 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:51:08.210950 | orchestrator | changed: [testbed-manager]
2026-02-04 00:51:08.210954 | orchestrator |
2026-02-04 00:51:08.210958 | orchestrator | TASK [osism.services.netdata : Include config tasks] ***************************
2026-02-04 00:51:08.210962 | orchestrator | Wednesday 04 February 2026 00:50:34 +0000 (0:00:49.027) 0:01:31.412 ****
2026-02-04 00:51:08.210967 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-04 00:51:08.210972 | orchestrator |
2026-02-04 00:51:08.210976 | orchestrator | TASK [osism.services.netdata : Copy configuration files] ***********************
2026-02-04 00:51:08.210980 | orchestrator | Wednesday 04 February 2026 00:50:35 +0000 (0:00:01.805) 0:01:33.217 ****
2026-02-04 00:51:08.210984 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf)
2026-02-04 00:51:08.210988 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf)
2026-02-04 00:51:08.210992 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf)
2026-02-04 00:51:08.210995 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf)
2026-02-04 00:51:08.211003 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf)
2026-02-04 00:51:08.211007 | orchestrator | changed: [testbed-manager] => (item=netdata.conf)
2026-02-04 00:51:08.211011 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf)
2026-02-04 00:51:08.211015 | orchestrator | changed: [testbed-node-0] => (item=stream.conf)
2026-02-04 00:51:08.211019 | orchestrator | changed: [testbed-node-1] => (item=stream.conf)
2026-02-04 00:51:08.211025 | orchestrator | changed: [testbed-node-2] => (item=stream.conf)
2026-02-04 00:51:08.211031 | orchestrator | changed: [testbed-node-4] => (item=stream.conf)
2026-02-04 00:51:08.211037 | orchestrator | changed: [testbed-node-3] => (item=stream.conf)
2026-02-04 00:51:08.211043 | orchestrator | changed: [testbed-node-5] => (item=stream.conf)
2026-02-04 00:51:08.211049 | orchestrator | changed: [testbed-manager] => (item=stream.conf)
2026-02-04 00:51:08.211055 | orchestrator |
2026-02-04 00:51:08.211061 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] ***
2026-02-04 00:51:08.211068 | orchestrator | Wednesday 04 February 2026 00:50:43 +0000 (0:00:07.532) 0:01:40.750 ****
2026-02-04 00:51:08.211074 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:51:08.211080 | orchestrator | ok: [testbed-manager]
2026-02-04 00:51:08.211086 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:51:08.211093 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:51:08.211099 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:51:08.211110 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:51:08.211116 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:51:08.211120 | orchestrator |
2026-02-04 00:51:08.211124 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] **************
2026-02-04 00:51:08.211128 | orchestrator | Wednesday 04 February 2026 00:50:46 +0000 (0:00:02.571) 0:01:43.322 ****
2026-02-04 00:51:08.211132 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:51:08.211136 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:51:08.211140 | orchestrator | changed: [testbed-manager]
2026-02-04 00:51:08.211143 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:51:08.211147 | orchestrator | changed: [testbed-node-3]
2026-02-04 00:51:08.211151 | orchestrator | changed: [testbed-node-4]
2026-02-04 00:51:08.211155 | orchestrator | changed: [testbed-node-5]
2026-02-04 00:51:08.211159 | orchestrator |
2026-02-04 00:51:08.211163 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] ***************
2026-02-04 00:51:08.211166 | orchestrator | Wednesday 04 February 2026 00:50:47 +0000 (0:00:01.882) 0:01:45.205 ****
2026-02-04 00:51:08.211170 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:51:08.211174 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:51:08.211178 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:51:08.211182 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:51:08.211185 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:51:08.211189 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:51:08.211193 | orchestrator | ok: [testbed-manager]
2026-02-04 00:51:08.211197 | orchestrator |
2026-02-04 00:51:08.211201 | orchestrator | TASK [osism.services.netdata : Manage service netdata] *************************
2026-02-04 00:51:08.211205 | orchestrator | Wednesday 04 February 2026 00:50:50 +0000 (0:00:02.442) 0:01:47.647 ****
2026-02-04 00:51:08.211209 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:51:08.211212 | orchestrator | ok: [testbed-manager]
2026-02-04 00:51:08.211216 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:51:08.211220 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:51:08.211224 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:51:08.211228 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:51:08.211232 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:51:08.211235 | orchestrator |
2026-02-04 00:51:08.211242 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] ***************
2026-02-04 00:51:08.211246 | orchestrator | Wednesday 04 February 2026 00:50:53 +0000 (0:00:03.412) 0:01:51.060 ****
2026-02-04 00:51:08.211250 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager
2026-02-04 00:51:08.211256 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4,
testbed-node-5
2026-02-04 00:51:08.211260 | orchestrator |
2026-02-04 00:51:08.211264 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] **********
2026-02-04 00:51:08.211268 | orchestrator | Wednesday 04 February 2026 00:50:56 +0000 (0:00:02.633) 0:01:53.693 ****
2026-02-04 00:51:08.211271 | orchestrator | changed: [testbed-manager]
2026-02-04 00:51:08.211275 | orchestrator |
2026-02-04 00:51:08.211279 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] *************
2026-02-04 00:51:08.211283 | orchestrator | Wednesday 04 February 2026 00:51:00 +0000 (0:00:03.686) 0:01:57.380 ****
2026-02-04 00:51:08.211287 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:51:08.211292 | orchestrator | changed: [testbed-node-5]
2026-02-04 00:51:08.211296 | orchestrator | changed: [testbed-node-4]
2026-02-04 00:51:08.211301 | orchestrator | changed: [testbed-manager]
2026-02-04 00:51:08.211305 | orchestrator | changed: [testbed-node-3]
2026-02-04 00:51:08.211310 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:51:08.211314 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:51:08.211319 | orchestrator |
2026-02-04 00:51:08.211324 | orchestrator | PLAY RECAP *********************************************************************
2026-02-04 00:51:08.211328 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-04 00:51:08.211338 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-04 00:51:08.211343 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-04 00:51:08.211348 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-04 00:51:08.211357 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-04 00:51:08.211364 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-04 00:51:08.211370 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-04 00:51:08.211378 | orchestrator |
2026-02-04 00:51:08.211387 | orchestrator |
2026-02-04 00:51:08.211393 | orchestrator | TASKS RECAP ********************************************************************
2026-02-04 00:51:08.211399 | orchestrator | Wednesday 04 February 2026 00:51:04 +0000 (0:00:04.255) 0:02:01.635 ****
2026-02-04 00:51:08.211405 | orchestrator | ===============================================================================
2026-02-04 00:51:08.211411 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 49.03s
2026-02-04 00:51:08.211418 | orchestrator | osism.services.netdata : Add repository -------------------------------- 16.71s
2026-02-04 00:51:08.211424 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 7.53s
2026-02-04 00:51:08.211431 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 7.10s
2026-02-04 00:51:08.211437 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 6.11s
2026-02-04 00:51:08.211466 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 4.26s
2026-02-04 00:51:08.211471 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 3.84s
2026-02-04 00:51:08.211474 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 3.77s
2026-02-04 00:51:08.211478 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 3.69s
2026-02-04 00:51:08.211482 | orchestrator | Group hosts based on enabled services ----------------------------------- 3.45s
2026-02-04 00:51:08.211486 | orchestrator |
osism.services.netdata : Manage service netdata ------------------------- 3.41s
2026-02-04 00:51:08.211490 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 2.63s
2026-02-04 00:51:08.211493 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 2.57s
2026-02-04 00:51:08.211497 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 2.44s
2026-02-04 00:51:08.211501 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.88s
2026-02-04 00:51:08.211505 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.81s
2026-02-04 00:51:08.214071 | orchestrator | 2026-02-04 00:51:08 | INFO  | Task 4b82724b-ecb1-4316-873a-4f7a89ed7b41 is in state STARTED
2026-02-04 00:51:08.215877 | orchestrator | 2026-02-04 00:51:08 | INFO  | Task 431c1300-0116-4e0e-8b3c-d7ea772e44bc is in state SUCCESS
2026-02-04 00:51:08.218218 | orchestrator | 2026-02-04 00:51:08 | INFO  | Task 3738b9ef-0976-4709-8642-f9c421f1f746 is in state STARTED
2026-02-04 00:51:08.218266 | orchestrator | 2026-02-04 00:51:08 | INFO  | Wait 1 second(s) until the next check
2026-02-04 00:51:11.254633 | orchestrator | 2026-02-04 00:51:11 | INFO  | Task bf6ef7d2-e02d-4506-bb9b-726c35056870 is in state STARTED
2026-02-04 00:51:11.256708 | orchestrator | 2026-02-04 00:51:11 | INFO  | Task 4b82724b-ecb1-4316-873a-4f7a89ed7b41 is in state STARTED
2026-02-04 00:51:11.258657 | orchestrator | 2026-02-04 00:51:11 | INFO  | Task 3738b9ef-0976-4709-8642-f9c421f1f746 is in state STARTED
2026-02-04 00:51:11.258733 | orchestrator | 2026-02-04 00:51:11 | INFO  | Wait 1 second(s) until the next check
2026-02-04 00:51:38.971176 | orchestrator | 2026-02-04 00:51:38 | INFO  | Task bf6ef7d2-e02d-4506-bb9b-726c35056870 is in state STARTED
2026-02-04 00:51:38.971514 | orchestrator | 2026-02-04 00:51:38 | INFO  | Task 4b82724b-ecb1-4316-873a-4f7a89ed7b41 is in state STARTED
2026-02-04 00:51:38.978805 | orchestrator | 2026-02-04 00:51:38 | INFO  | Task 3738b9ef-0976-4709-8642-f9c421f1f746 is in state SUCCESS
2026-02-04 00:51:38.980614 | orchestrator |
2026-02-04 00:51:38.980662 | orchestrator |
2026-02-04 00:51:38.980670 | orchestrator | PLAY [Apply role phpmyadmin] ***************************************************
2026-02-04 00:51:38.980677 | orchestrator |
2026-02-04 00:51:38.980683 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] *************
2026-02-04 00:51:38.980690 | orchestrator | Wednesday 04 February 2026 00:49:34 +0000 (0:00:00.332) 0:00:00.332 ****
2026-02-04 00:51:38.980697 | orchestrator | ok: [testbed-manager]
2026-02-04 00:51:38.980704 | orchestrator |
2026-02-04 00:51:38.980710 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] *****************
2026-02-04 00:51:38.980716 | orchestrator | Wednesday 04 February 2026 00:49:36 +0000 (0:00:01.987) 0:00:02.319 ****
2026-02-04 00:51:38.980771 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin)
2026-02-04 00:51:38.980777 | orchestrator |
2026-02-04 00:51:38.980784 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] ****************
2026-02-04 00:51:38.980790 | orchestrator | Wednesday 04 February 2026 00:49:38 +0000 (0:00:01.045) 0:00:03.365 ****
2026-02-04 00:51:38.980796 | orchestrator | changed: [testbed-manager]
2026-02-04 00:51:38.980802 | orchestrator |
2026-02-04 00:51:38.980808 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] *******************
2026-02-04 00:51:38.980814 | orchestrator | Wednesday 04 February 2026 00:49:39 +0000 (0:00:01.964) 0:00:05.329 ****
2026-02-04 00:51:38.980820 | orchestrator |
FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left).
2026-02-04 00:51:38.980827 | orchestrator | ok: [testbed-manager]
2026-02-04 00:51:38.980833 | orchestrator |
2026-02-04 00:51:38.980839 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] *******
2026-02-04 00:51:38.980887 | orchestrator | Wednesday 04 February 2026 00:50:45 +0000 (0:01:05.222) 0:01:10.551 ****
2026-02-04 00:51:38.980894 | orchestrator | changed: [testbed-manager]
2026-02-04 00:51:38.980900 | orchestrator |
2026-02-04 00:51:38.980905 | orchestrator | PLAY RECAP *********************************************************************
2026-02-04 00:51:38.980912 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-04 00:51:38.980919 | orchestrator |
2026-02-04 00:51:38.980925 | orchestrator |
2026-02-04 00:51:38.980931 | orchestrator | TASKS RECAP ********************************************************************
2026-02-04 00:51:38.980937 | orchestrator | Wednesday 04 February 2026 00:51:04 +0000 (0:00:19.197) 0:01:29.748 ****
2026-02-04 00:51:38.980943 | orchestrator | ===============================================================================
2026-02-04 00:51:38.980949 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 65.22s
2026-02-04 00:51:38.980954 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ----------------- 19.20s
2026-02-04 00:51:38.980973 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 1.99s
2026-02-04 00:51:38.980980 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.96s
2026-02-04 00:51:38.980986 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 1.05s
2026-02-04 00:51:38.980992 | orchestrator |
2026-02-04 00:51:38.980998 | orchestrator |
2026-02-04 00:51:38.981004 |
orchestrator | PLAY [Apply role common] *******************************************************
2026-02-04 00:51:38.981010 | orchestrator |
2026-02-04 00:51:38.981015 | orchestrator | TASK [common : include_tasks] **************************************************
2026-02-04 00:51:38.981021 | orchestrator | Wednesday 04 February 2026 00:48:50 +0000 (0:00:00.313) 0:00:00.313 ****
2026-02-04 00:51:38.981027 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-04 00:51:38.981034 | orchestrator |
2026-02-04 00:51:38.981040 | orchestrator | TASK [common : Ensuring config directories exist] ******************************
2026-02-04 00:51:38.981046 | orchestrator | Wednesday 04 February 2026 00:48:51 +0000 (0:00:01.807) 0:00:02.120 ****
2026-02-04 00:51:38.981052 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron'])
2026-02-04 00:51:38.981058 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron'])
2026-02-04 00:51:38.981064 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron'])
2026-02-04 00:51:38.981069 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-02-04 00:51:38.981075 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-02-04 00:51:38.981081 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron'])
2026-02-04 00:51:38.981087 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-02-04 00:51:38.981094 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-02-04 00:51:38.981100 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron'])
2026-02-04 00:51:38.981105 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-02-04 00:51:38.981116 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron'])
2026-02-04 00:51:38.981122 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-02-04 00:51:38.981128 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron'])
2026-02-04 00:51:38.981134 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-02-04 00:51:38.981140 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-02-04 00:51:38.981146 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-02-04 00:51:38.981161 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-02-04 00:51:38.981168 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-02-04 00:51:38.981174 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-02-04 00:51:38.981180 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-02-04 00:51:38.981186 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-02-04 00:51:38.981192 | orchestrator |
2026-02-04 00:51:38.981198 | orchestrator | TASK [common : include_tasks] **************************************************
2026-02-04 00:51:38.981203 | orchestrator | Wednesday 04 February 2026 00:48:58 +0000 (0:00:06.711) 0:00:08.832 ****
2026-02-04 00:51:38.981211 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-04 00:51:38.981225 |
orchestrator |
2026-02-04 00:51:38.981232 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] *********
2026-02-04 00:51:38.981239 | orchestrator | Wednesday 04 February 2026 00:49:00 +0000 (0:00:01.968) 0:00:10.800 ****
2026-02-04 00:51:38.981248 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-04 00:51:38.981258 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-04 00:51:38.981266 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-04 00:51:38.981273 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-04 00:51:38.981280 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-04 00:51:38.981289 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-04 00:51:38.981301 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 00:51:38.981312 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-04 00:51:38.981320 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 00:51:38.981327 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 00:51:38.981334 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 00:51:38.981341 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 00:51:38.981354 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 00:51:38.981362 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 00:51:38.981382 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 00:51:38.981389 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 00:51:38.981396 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 00:51:38.981404 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 00:51:38.981410 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 00:51:38.981434 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 00:51:38.981445 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 00:51:38.981452 | orchestrator |
2026-02-04 00:51:38.981459 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS
certificate] *** 2026-02-04 00:51:38.981466 | orchestrator | Wednesday 04 February 2026 00:49:08 +0000 (0:00:07.574) 0:00:18.375 **** 2026-02-04 00:51:38.981478 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-04 00:51:38.981490 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 00:51:38.981497 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 00:51:38.981504 | orchestrator | skipping: [testbed-manager] 2026-02-04 00:51:38.981512 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 
'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-04 00:51:38.981519 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 00:51:38.981526 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 00:51:38.981533 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:51:38.981641 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-04 00:51:38.981655 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 00:51:38.981672 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 00:51:38.981680 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-04 00:51:38.981686 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 
'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 00:51:38.981692 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 00:51:38.981698 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:51:38.981704 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:51:38.981710 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-04 00:51:38.981717 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 00:51:38.981740 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 00:51:38.981753 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:51:38.981759 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-04 00:51:38.981769 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2026-02-04 00:51:38.981776 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 00:51:38.981782 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:51:38.981788 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-04 00:51:38.981795 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 00:51:38.981801 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': 
{'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 00:51:38.981807 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:51:38.981813 | orchestrator | 2026-02-04 00:51:38.981819 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2026-02-04 00:51:38.981825 | orchestrator | Wednesday 04 February 2026 00:49:11 +0000 (0:00:03.600) 0:00:21.976 **** 2026-02-04 00:51:38.981831 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-04 00:51:38.981841 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 00:51:38.981850 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 
'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 00:51:38.981857 | orchestrator | skipping: [testbed-manager] 2026-02-04 00:51:38.981867 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-04 00:51:38.981873 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 00:51:38.981879 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 00:51:38.981885 | 
orchestrator | skipping: [testbed-node-0] 2026-02-04 00:51:38.981891 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-04 00:51:38.981897 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 00:51:38.981907 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 00:51:38.981916 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-04 00:51:38.982260 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 00:51:38.982276 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 00:51:38.982282 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:51:38.982289 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-04 
00:51:38.982295 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:51:38.982301 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 00:51:38.982307 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 00:51:38.982319 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:51:38.982325 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-04 00:51:38.982334 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 00:51:38.982345 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 00:51:38.982351 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:51:38.982357 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-04 00:51:38.982364 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 00:51:38.982370 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 00:51:38.982376 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:51:38.982382 | orchestrator | 2026-02-04 00:51:38.982388 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2026-02-04 00:51:38.982394 | orchestrator | Wednesday 04 February 2026 00:49:15 +0000 (0:00:03.875) 0:00:25.852 **** 2026-02-04 00:51:38.982400 | orchestrator | skipping: [testbed-manager] 2026-02-04 00:51:38.982406 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:51:38.982412 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:51:38.982435 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:51:38.982441 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:51:38.982453 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:51:38.982459 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:51:38.982465 | orchestrator | 2026-02-04 00:51:38.982471 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2026-02-04 00:51:38.982477 | orchestrator | Wednesday 04 February 2026 00:49:17 +0000 (0:00:01.492) 0:00:27.344 **** 2026-02-04 00:51:38.982483 | orchestrator | skipping: [testbed-manager] 2026-02-04 00:51:38.982489 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:51:38.982494 | 
orchestrator | skipping: [testbed-node-1] 2026-02-04 00:51:38.982500 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:51:38.982506 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:51:38.982512 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:51:38.982518 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:51:38.982524 | orchestrator | 2026-02-04 00:51:38.982530 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2026-02-04 00:51:38.982536 | orchestrator | Wednesday 04 February 2026 00:49:19 +0000 (0:00:02.143) 0:00:29.488 **** 2026-02-04 00:51:38.982542 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-04 00:51:38.982551 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-04 00:51:38.982564 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 
'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-04 00:51:38.982571 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-04 00:51:38.982577 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-04 00:51:38.982584 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-04 00:51:38.982593 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:51:38.982600 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-04 00:51:38.982606 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:51:38.982615 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:51:38.982625 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:51:38.982632 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:51:38.982638 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': 
{'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:51:38.982648 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:51:38.982655 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:51:38.982661 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:51:38.982668 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 
'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:51:38.982676 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:51:38.982686 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:51:38.982693 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:51:38.982699 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': 
{'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:51:38.982709 | orchestrator | 2026-02-04 00:51:38.982715 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2026-02-04 00:51:38.982721 | orchestrator | Wednesday 04 February 2026 00:49:31 +0000 (0:00:12.542) 0:00:42.030 **** 2026-02-04 00:51:38.982728 | orchestrator | [WARNING]: Skipped 2026-02-04 00:51:38.982734 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2026-02-04 00:51:38.982740 | orchestrator | to this access issue: 2026-02-04 00:51:38.982746 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2026-02-04 00:51:38.982752 | orchestrator | directory 2026-02-04 00:51:38.982758 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-04 00:51:38.982764 | orchestrator | 2026-02-04 00:51:38.982770 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2026-02-04 00:51:38.982776 | orchestrator | Wednesday 04 February 2026 00:49:35 +0000 (0:00:03.300) 0:00:45.330 **** 2026-02-04 00:51:38.982782 | orchestrator | [WARNING]: Skipped 2026-02-04 00:51:38.982788 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2026-02-04 00:51:38.982794 | orchestrator | to this access issue: 2026-02-04 00:51:38.982800 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2026-02-04 00:51:38.982806 | orchestrator | directory 2026-02-04 00:51:38.982812 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-04 00:51:38.982817 | orchestrator | 2026-02-04 00:51:38.982823 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2026-02-04 
00:51:38.982829 | orchestrator | Wednesday 04 February 2026 00:49:36 +0000 (0:00:01.137) 0:00:46.468 **** 2026-02-04 00:51:38.982835 | orchestrator | [WARNING]: Skipped 2026-02-04 00:51:38.982841 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2026-02-04 00:51:38.982847 | orchestrator | to this access issue: 2026-02-04 00:51:38.982853 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2026-02-04 00:51:38.982858 | orchestrator | directory 2026-02-04 00:51:38.982864 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-04 00:51:38.982870 | orchestrator | 2026-02-04 00:51:38.982876 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2026-02-04 00:51:38.982882 | orchestrator | Wednesday 04 February 2026 00:49:37 +0000 (0:00:01.421) 0:00:47.891 **** 2026-02-04 00:51:38.982888 | orchestrator | [WARNING]: Skipped 2026-02-04 00:51:38.982894 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2026-02-04 00:51:38.982900 | orchestrator | to this access issue: 2026-02-04 00:51:38.982906 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2026-02-04 00:51:38.982911 | orchestrator | directory 2026-02-04 00:51:38.982917 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-04 00:51:38.982924 | orchestrator | 2026-02-04 00:51:38.982930 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2026-02-04 00:51:38.982935 | orchestrator | Wednesday 04 February 2026 00:49:38 +0000 (0:00:01.254) 0:00:49.145 **** 2026-02-04 00:51:38.982941 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:51:38.982947 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:51:38.982953 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:51:38.982959 | orchestrator | changed: [testbed-node-2] 2026-02-04 
00:51:38.982965 | orchestrator | changed: [testbed-manager] 2026-02-04 00:51:38.982973 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:51:38.982979 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:51:38.982985 | orchestrator | 2026-02-04 00:51:38.982991 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2026-02-04 00:51:38.983003 | orchestrator | Wednesday 04 February 2026 00:49:46 +0000 (0:00:07.991) 0:00:57.137 **** 2026-02-04 00:51:38.983009 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-04 00:51:38.983015 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-04 00:51:38.983021 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-04 00:51:38.983032 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-04 00:51:38.983038 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-04 00:51:38.983044 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-04 00:51:38.983050 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-04 00:51:38.983056 | orchestrator | 2026-02-04 00:51:38.983062 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2026-02-04 00:51:38.983068 | orchestrator | Wednesday 04 February 2026 00:49:54 +0000 (0:00:07.794) 0:01:04.931 **** 2026-02-04 00:51:38.983074 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:51:38.983079 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:51:38.983085 | orchestrator | changed: [testbed-node-1] 2026-02-04 
00:51:38.983091 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:51:38.983097 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:51:38.983103 | orchestrator | changed: [testbed-manager] 2026-02-04 00:51:38.983109 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:51:38.983114 | orchestrator | 2026-02-04 00:51:38.983120 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2026-02-04 00:51:38.983126 | orchestrator | Wednesday 04 February 2026 00:49:59 +0000 (0:00:04.271) 0:01:09.203 **** 2026-02-04 00:51:38.983133 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-04 00:51:38.983139 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 00:51:38.983146 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-04 00:51:38.983152 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 00:51:38.983164 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:51:38.983175 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:51:38.983182 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 
'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-04 00:51:38.983189 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-04 00:51:38.983195 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 00:51:38.983201 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 
'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 00:51:38.983207 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-04 00:51:38.983217 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 00:51:38.983223 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:51:38.983232 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': 
{'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:51:38.983238 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-04 00:51:38.983245 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 00:51:38.983254 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-04 00:51:38.983260 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 00:51:38.983267 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:51:38.983276 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:51:38.983284 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:51:38.983291 | orchestrator | 2026-02-04 00:51:38.983297 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2026-02-04 00:51:38.983303 | orchestrator | Wednesday 04 February 2026 00:50:02 +0000 (0:00:03.418) 0:01:12.622 **** 2026-02-04 00:51:38.983309 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-04 00:51:38.983315 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-04 00:51:38.983321 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-04 00:51:38.983329 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-04 00:51:38.983336 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-04 00:51:38.983341 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-04 00:51:38.983347 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-04 00:51:38.983353 | orchestrator | 2026-02-04 00:51:38.983359 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2026-02-04 00:51:38.983365 | orchestrator | Wednesday 04 February 2026 00:50:05 +0000 (0:00:03.157) 0:01:15.779 **** 2026-02-04 00:51:38.983371 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-04 00:51:38.983377 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-04 00:51:38.983383 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-04 00:51:38.983389 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-04 00:51:38.983395 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-04 00:51:38.983401 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-04 00:51:38.983407 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-04 00:51:38.983413 | orchestrator | 2026-02-04 00:51:38.983439 | orchestrator | TASK [common : Check common containers] **************************************** 2026-02-04 00:51:38.983446 | orchestrator | Wednesday 04 February 2026 00:50:08 +0000 (0:00:03.157) 0:01:18.937 **** 2026-02-04 00:51:38.983452 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-04 00:51:38.983465 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-04 00:51:38.983472 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 00:51:38.983478 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-04 00:51:38.983487 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-04 00:51:38.983502 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 00:51:38.983509 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-04 00:51:38.983516 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 00:51:38.983522 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 00:51:38.983532 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 00:51:38.983539 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 00:51:38.983548 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 00:51:38.983558 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 00:51:38.983565 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 00:51:38.983572 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-04 00:51:38.983578 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 00:51:38.983588 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-04 00:51:38.983595 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 00:51:38.983601 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 00:51:38.983610 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 00:51:38.983617 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 00:51:38.983623 | orchestrator |
2026-02-04 00:51:38.983633 | orchestrator | TASK [common : Creating log volume] ********************************************
2026-02-04 00:51:38.983639 | orchestrator | Wednesday 04 February 2026 00:50:15 +0000 (0:00:06.751) 0:01:25.688 ****
2026-02-04 00:51:38.983646 | orchestrator | changed: [testbed-manager]
2026-02-04 00:51:38.983653 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:51:38.983662 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:51:38.983671 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:51:38.983677 | orchestrator | changed: [testbed-node-3]
2026-02-04 00:51:38.983693 | orchestrator | changed: [testbed-node-4]
2026-02-04 00:51:38.983699 | orchestrator | changed: [testbed-node-5]
2026-02-04 00:51:38.983705 | orchestrator |
2026-02-04 00:51:38.983711 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] ***********************
2026-02-04 00:51:38.983717 | orchestrator | Wednesday 04 February 2026 00:50:17 +0000 (0:00:02.128) 0:01:27.817 ****
2026-02-04 00:51:38.983729 | orchestrator | changed: [testbed-manager]
2026-02-04 00:51:38.983735 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:51:38.983741 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:51:38.983747 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:51:38.983753 | orchestrator | changed: [testbed-node-3]
2026-02-04 00:51:38.983762 | orchestrator | changed: [testbed-node-4]
2026-02-04 00:51:38.983768 | orchestrator | changed: [testbed-node-5]
2026-02-04 00:51:38.983774 | orchestrator |
2026-02-04 00:51:38.983780 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-04 00:51:38.983786 | orchestrator | Wednesday 04 February 2026 00:50:19 +0000 (0:00:01.471) 0:01:29.289 ****
2026-02-04 00:51:38.983792 | orchestrator |
2026-02-04 00:51:38.983798 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-04 00:51:38.983804 | orchestrator | Wednesday 04 February 2026 00:50:19 +0000 (0:00:00.089) 0:01:29.378 ****
2026-02-04 00:51:38.983810 | orchestrator |
2026-02-04 00:51:38.983816 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-04 00:51:38.983822 | orchestrator | Wednesday 04 February 2026 00:50:19 +0000 (0:00:00.079) 0:01:29.458 ****
2026-02-04 00:51:38.983829 | orchestrator |
2026-02-04 00:51:38.983835 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-04 00:51:38.983840 | orchestrator | Wednesday 04 February 2026 00:50:19 +0000 (0:00:00.278) 0:01:29.737 ****
2026-02-04 00:51:38.983846 | orchestrator |
2026-02-04 00:51:38.983852 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-04 00:51:38.983858 | orchestrator | Wednesday 04 February 2026 00:50:19 +0000 (0:00:00.083) 0:01:29.820 ****
2026-02-04 00:51:38.983864 | orchestrator |
2026-02-04 00:51:38.983870 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-04 00:51:38.983876 | orchestrator | Wednesday 04 February 2026 00:50:19 +0000 (0:00:00.070) 0:01:29.891 ****
2026-02-04 00:51:38.983882 | orchestrator |
2026-02-04 00:51:38.983888 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-04 00:51:38.983894 | orchestrator | Wednesday 04 February 2026 00:50:19 +0000 (0:00:00.078) 0:01:29.970 ****
2026-02-04 00:51:38.983900 | orchestrator |
2026-02-04 00:51:38.983906 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] ***************************
2026-02-04 00:51:38.983912 | orchestrator | Wednesday 04 February 2026 00:50:19 +0000 (0:00:00.096) 0:01:30.066 ****
2026-02-04 00:51:38.983918 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:51:38.983923 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:51:38.983929 | orchestrator | changed: [testbed-node-3]
2026-02-04 00:51:38.983935 | orchestrator | changed: [testbed-node-5]
2026-02-04 00:51:38.983941 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:51:38.983947 | orchestrator | changed: [testbed-node-4]
2026-02-04 00:51:38.983953 | orchestrator | changed: [testbed-manager]
2026-02-04 00:51:38.983959 | orchestrator |
2026-02-04 00:51:38.983965 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] *********************
2026-02-04 00:51:38.983971 | orchestrator | Wednesday 04 February 2026 00:50:50 +0000 (0:00:30.613) 0:02:00.679 ****
2026-02-04 00:51:38.983977 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:51:38.983983 | orchestrator | changed: [testbed-node-3]
2026-02-04 00:51:38.983989 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:51:38.983995 | orchestrator | changed: [testbed-node-5]
2026-02-04 00:51:38.984001 | orchestrator | changed: [testbed-node-4]
2026-02-04 00:51:38.984007 | orchestrator | changed: [testbed-manager]
2026-02-04 00:51:38.984013 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:51:38.984019 | orchestrator |
2026-02-04 00:51:38.984025 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] ****
2026-02-04 00:51:38.984031 | orchestrator | Wednesday 04 February 2026 00:51:22 +0000 (0:00:31.768) 0:02:32.448 ****
2026-02-04 00:51:38.984037 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:51:38.984043 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:51:38.984049 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:51:38.984055 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:51:38.984061 | orchestrator | ok: [testbed-manager]
2026-02-04 00:51:38.984067 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:51:38.984073 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:51:38.984079 | orchestrator |
2026-02-04 00:51:38.984085 | orchestrator | RUNNING HANDLER [common : Restart cron container] ******************************
2026-02-04 00:51:38.984094 | orchestrator | Wednesday 04 February 2026 00:51:27 +0000 (0:00:04.871) 0:02:37.320 ****
2026-02-04 00:51:38.984100 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:51:38.984106 | orchestrator | changed: [testbed-node-4]
2026-02-04 00:51:38.984112 | orchestrator | changed: [testbed-manager]
2026-02-04 00:51:38.984118 | orchestrator | changed: [testbed-node-5]
2026-02-04 00:51:38.984124 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:51:38.984130 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:51:38.984139 | orchestrator | changed: [testbed-node-3]
2026-02-04 00:51:38.984145 | orchestrator |
2026-02-04 00:51:38.984151 | orchestrator | PLAY RECAP *********************************************************************
2026-02-04 00:51:38.984157 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-04 00:51:38.984163 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-04 00:51:38.984172 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-04 00:51:38.984179 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-04 00:51:38.984185 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-04 00:51:38.984191 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-04 00:51:38.984197 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-04 00:51:38.984203 | orchestrator |
2026-02-04 00:51:38.984209 | orchestrator |
2026-02-04 00:51:38.984215 | orchestrator | TASKS RECAP ********************************************************************
2026-02-04 00:51:38.984221 | orchestrator | Wednesday 04 February 2026 00:51:37 +0000 (0:00:10.771) 0:02:48.091 ****
2026-02-04 00:51:38.984227 | orchestrator | ===============================================================================
2026-02-04 00:51:38.984233 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 31.77s
2026-02-04 00:51:38.984239 | orchestrator | common : Restart fluentd container ------------------------------------- 30.61s
2026-02-04 00:51:38.984244 | orchestrator | common : Copying over config.json files for services ------------------- 12.54s
2026-02-04 00:51:38.984250 | orchestrator | common : Restart cron container ---------------------------------------- 10.77s
2026-02-04 00:51:38.984256 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 7.99s
2026-02-04 00:51:38.984262 | orchestrator | common : Copying over cron logrotate config file ------------------------ 7.80s
2026-02-04 00:51:38.984268 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 7.58s
2026-02-04 00:51:38.984274 | orchestrator | common : Check common containers ---------------------------------------- 6.75s
2026-02-04 00:51:38.984280 | orchestrator | common : Ensuring config directories exist ------------------------------ 6.71s
2026-02-04 00:51:38.984286 | orchestrator | common : Initializing toolbox container using normal user --------------- 4.87s
2026-02-04 00:51:38.984292 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 4.27s
2026-02-04 00:51:38.984298 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 3.88s
2026-02-04 00:51:38.984304 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 3.60s
2026-02-04 00:51:38.984310 | orchestrator | common : Ensuring config directories have correct owner and permission --- 3.42s
2026-02-04 00:51:38.984316 | orchestrator | common : Find custom fluentd input config files ------------------------- 3.30s
2026-02-04 00:51:38.984325 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 3.16s
2026-02-04 00:51:38.984331 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 3.15s
2026-02-04 00:51:38.984337 | orchestrator | common : Restart systemd-tmpfiles --------------------------------------- 2.14s
2026-02-04 00:51:38.984343 | orchestrator | common : Creating log volume -------------------------------------------- 2.13s
2026-02-04 00:51:38.984348 | orchestrator | common : include_tasks -------------------------------------------------- 1.97s
2026-02-04 00:51:38.984354 | orchestrator | 2026-02-04 00:51:38 | INFO  | Wait 1 second(s) until the next check
2026-02-04 00:51:42.038327 | orchestrator | 2026-02-04 00:51:42 | INFO  | Task d30e904e-f5ed-49cd-95ad-3c4d6f37bdb8 is in state STARTED
2026-02-04 00:51:42.039637 | orchestrator | 2026-02-04 00:51:42 | INFO  | Task c9ba018a-1caa-4480-8795-5d07c11e697b is in state STARTED
2026-02-04 00:51:42.039812 | orchestrator | 2026-02-04 00:51:42 | INFO  | Task bf6ef7d2-e02d-4506-bb9b-726c35056870 is in state STARTED
2026-02-04 00:51:42.041078 | orchestrator | 2026-02-04 00:51:42 | INFO  | Task 4b82724b-ecb1-4316-873a-4f7a89ed7b41 is in state STARTED
2026-02-04 00:51:42.042126 | orchestrator | 2026-02-04 00:51:42 | INFO  | Task 45d2c08c-48a9-4c47-a9b0-361cfedb17f1 is in state STARTED
2026-02-04 00:51:42.043183 | orchestrator | 2026-02-04 00:51:42 | INFO  | Task 37b7db19-2c02-419a-a144-115661d02e4d is in state STARTED
2026-02-04 00:51:42.043222 | orchestrator | 2026-02-04 00:51:42 | INFO  | Wait 1 second(s) until the next check
2026-02-04 00:51:45.094929 | orchestrator | 2026-02-04 00:51:45 | INFO  | Task d30e904e-f5ed-49cd-95ad-3c4d6f37bdb8 is in state STARTED
2026-02-04 00:51:45.096057 | orchestrator | 2026-02-04 00:51:45 | INFO  | Task c9ba018a-1caa-4480-8795-5d07c11e697b is in state STARTED
2026-02-04 00:51:45.096864 | orchestrator | 2026-02-04 00:51:45 | INFO  | Task bf6ef7d2-e02d-4506-bb9b-726c35056870 is in state STARTED
2026-02-04 00:51:45.098112 | orchestrator | 2026-02-04 00:51:45 | INFO  | Task 4b82724b-ecb1-4316-873a-4f7a89ed7b41 is in state STARTED
2026-02-04 00:51:45.101674 | orchestrator | 2026-02-04 00:51:45 | INFO  | Task 45d2c08c-48a9-4c47-a9b0-361cfedb17f1 is in state STARTED
2026-02-04 00:51:45.102809 | orchestrator | 2026-02-04 00:51:45 | INFO  | Task 37b7db19-2c02-419a-a144-115661d02e4d is in state STARTED
2026-02-04 00:51:45.102868 | orchestrator | 2026-02-04 00:51:45 | INFO  | Wait 1 second(s) until the next check
2026-02-04 00:51:48.148177 | orchestrator | 2026-02-04 00:51:48 | INFO  | Task d30e904e-f5ed-49cd-95ad-3c4d6f37bdb8 is in state STARTED
2026-02-04 00:51:48.148830 | orchestrator | 2026-02-04 00:51:48 | INFO  | Task c9ba018a-1caa-4480-8795-5d07c11e697b is in state STARTED
2026-02-04 00:51:48.150044 | orchestrator | 2026-02-04 00:51:48 | INFO  | Task bf6ef7d2-e02d-4506-bb9b-726c35056870 is in state STARTED
2026-02-04 00:51:48.151488 | orchestrator | 2026-02-04 00:51:48 | INFO  | Task 4b82724b-ecb1-4316-873a-4f7a89ed7b41 is in state STARTED
2026-02-04 00:51:48.152466 | orchestrator | 2026-02-04 00:51:48 | INFO  | Task 45d2c08c-48a9-4c47-a9b0-361cfedb17f1 is in state STARTED
2026-02-04 00:51:48.154165 | orchestrator | 2026-02-04 00:51:48 | INFO  | Task 37b7db19-2c02-419a-a144-115661d02e4d is in state STARTED
2026-02-04 00:51:48.154223 | orchestrator | 2026-02-04 00:51:48 | INFO  | Wait 1 second(s) until the next check
2026-02-04 00:51:51.190542 | orchestrator | 2026-02-04 00:51:51 | INFO  | Task d30e904e-f5ed-49cd-95ad-3c4d6f37bdb8 is in state STARTED
2026-02-04 00:51:51.191906 | orchestrator | 2026-02-04 00:51:51 | INFO  | Task c9ba018a-1caa-4480-8795-5d07c11e697b is in state STARTED
2026-02-04 00:51:51.195182 | orchestrator | 2026-02-04 00:51:51 | INFO  | Task bf6ef7d2-e02d-4506-bb9b-726c35056870 is in state STARTED
2026-02-04 00:51:51.196095 | orchestrator | 2026-02-04 00:51:51 | INFO  | Task 4b82724b-ecb1-4316-873a-4f7a89ed7b41 is in state STARTED
2026-02-04 00:51:51.196891 | orchestrator | 2026-02-04 00:51:51 | INFO  | Task 45d2c08c-48a9-4c47-a9b0-361cfedb17f1 is in state STARTED
2026-02-04 00:51:51.197987 | orchestrator | 2026-02-04 00:51:51 | INFO  | Task 37b7db19-2c02-419a-a144-115661d02e4d is in state STARTED
2026-02-04 00:51:51.198042 | orchestrator | 2026-02-04 00:51:51 | INFO  | Wait 1 second(s) until the next check
2026-02-04 00:51:54.253337 | orchestrator | 2026-02-04 00:51:54 | INFO  | Task d30e904e-f5ed-49cd-95ad-3c4d6f37bdb8 is in state STARTED
2026-02-04 00:51:54.255760 | orchestrator | 2026-02-04 00:51:54 | INFO  | Task c9ba018a-1caa-4480-8795-5d07c11e697b is in state STARTED
2026-02-04 00:51:54.259809 | orchestrator | 2026-02-04 00:51:54 | INFO  | Task bf6ef7d2-e02d-4506-bb9b-726c35056870 is in state STARTED
2026-02-04 00:51:54.263234 | orchestrator | 2026-02-04 00:51:54 | INFO  | Task 4b82724b-ecb1-4316-873a-4f7a89ed7b41 is in state STARTED
2026-02-04 00:51:54.264857 | orchestrator | 2026-02-04 00:51:54 | INFO  | Task 45d2c08c-48a9-4c47-a9b0-361cfedb17f1 is in state STARTED
2026-02-04 00:51:54.270286 | orchestrator | 2026-02-04 00:51:54 | INFO  | Task 37b7db19-2c02-419a-a144-115661d02e4d is in state STARTED
2026-02-04 00:51:54.270325 | orchestrator | 2026-02-04 00:51:54 | INFO  | Wait 1 second(s) until the next check
2026-02-04 00:51:57.370317 | orchestrator | 2026-02-04 00:51:57 | INFO  | Task d30e904e-f5ed-49cd-95ad-3c4d6f37bdb8 is in state STARTED
2026-02-04 00:51:57.370388 | orchestrator | 2026-02-04 00:51:57 | INFO  | Task c9ba018a-1caa-4480-8795-5d07c11e697b is in state STARTED
2026-02-04 00:51:57.370395 | orchestrator | 2026-02-04 00:51:57 | INFO  | Task bf6ef7d2-e02d-4506-bb9b-726c35056870 is in state STARTED
2026-02-04 00:51:57.370469 | orchestrator | 2026-02-04 00:51:57 | INFO  | Task 4b82724b-ecb1-4316-873a-4f7a89ed7b41 is in state STARTED
2026-02-04 00:51:57.370475 | orchestrator | 2026-02-04 00:51:57 | INFO  | Task 45d2c08c-48a9-4c47-a9b0-361cfedb17f1 is in state STARTED
2026-02-04 00:51:57.370480 | orchestrator | 2026-02-04 00:51:57 | INFO  | Task 37b7db19-2c02-419a-a144-115661d02e4d is in state STARTED
2026-02-04 00:51:57.370484 | orchestrator | 2026-02-04 00:51:57 | INFO  | Wait 1 second(s) until the next check
2026-02-04 00:52:00.417136 | orchestrator | 2026-02-04 00:52:00 | INFO  | Task d30e904e-f5ed-49cd-95ad-3c4d6f37bdb8 is in state STARTED
2026-02-04 00:52:00.417223 | orchestrator | 2026-02-04 00:52:00 | INFO  | Task c9ba018a-1caa-4480-8795-5d07c11e697b is in state STARTED
2026-02-04 00:52:00.417235 | orchestrator | 2026-02-04 00:52:00 | INFO  | Task bf6ef7d2-e02d-4506-bb9b-726c35056870 is in state STARTED
2026-02-04 00:52:00.417244 | orchestrator | 2026-02-04 00:52:00 | INFO  | Task 989a85c0-e4f7-4c9c-8bfd-ae78edd6b507 is in state STARTED
2026-02-04 00:52:00.417250 | orchestrator | 2026-02-04 00:52:00 | INFO  | Task 4b82724b-ecb1-4316-873a-4f7a89ed7b41 is in state STARTED
2026-02-04 00:52:00.417256 | orchestrator | 2026-02-04 00:52:00 | INFO  | Task 45d2c08c-48a9-4c47-a9b0-361cfedb17f1 is in state STARTED
2026-02-04 00:52:00.417262 | orchestrator | 2026-02-04 00:52:00 | INFO  | Task 37b7db19-2c02-419a-a144-115661d02e4d is in state SUCCESS
2026-02-04 00:52:00.417268 | orchestrator | 2026-02-04 00:52:00 | INFO  | Wait 1 second(s) until the next check
2026-02-04 00:52:03.624391 | orchestrator | 2026-02-04 00:52:03 | INFO  | Task d30e904e-f5ed-49cd-95ad-3c4d6f37bdb8 is in state STARTED
2026-02-04 00:52:03.624562 | orchestrator | 2026-02-04 00:52:03 | INFO  | Task c9ba018a-1caa-4480-8795-5d07c11e697b is in state STARTED
2026-02-04 00:52:03.624595 | orchestrator | 2026-02-04 00:52:03 | INFO  | Task bf6ef7d2-e02d-4506-bb9b-726c35056870 is in state STARTED
2026-02-04 00:52:03.624603 | orchestrator | 2026-02-04 00:52:03 | INFO  | Task 989a85c0-e4f7-4c9c-8bfd-ae78edd6b507 is in state STARTED
2026-02-04 00:52:03.627871 | orchestrator | 2026-02-04 00:52:03 | INFO  | Task 4b82724b-ecb1-4316-873a-4f7a89ed7b41 is in state STARTED
2026-02-04 00:52:03.628790 | orchestrator | 2026-02-04 00:52:03 | INFO  | Task 45d2c08c-48a9-4c47-a9b0-361cfedb17f1 is in state STARTED
2026-02-04 00:52:03.628853 | orchestrator | 2026-02-04 00:52:03 | INFO  | Wait 1 second(s) until the next check
2026-02-04 00:52:06.690205 | orchestrator | 2026-02-04 00:52:06 | INFO  | Task d30e904e-f5ed-49cd-95ad-3c4d6f37bdb8 is in state STARTED
2026-02-04 00:52:06.690673 | orchestrator | 2026-02-04 00:52:06 | INFO  | Task c9ba018a-1caa-4480-8795-5d07c11e697b is in state STARTED
2026-02-04 00:52:06.692725 | orchestrator | 2026-02-04 00:52:06 | INFO  | Task bf6ef7d2-e02d-4506-bb9b-726c35056870 is in state STARTED
2026-02-04 00:52:06.693772 | orchestrator | 2026-02-04 00:52:06 | INFO  | Task 989a85c0-e4f7-4c9c-8bfd-ae78edd6b507 is in state STARTED
2026-02-04 00:52:06.694889 | orchestrator | 2026-02-04 00:52:06 | INFO  | Task 4b82724b-ecb1-4316-873a-4f7a89ed7b41 is in state STARTED
2026-02-04 00:52:06.696040 | orchestrator | 2026-02-04 00:52:06 | INFO  | Task 45d2c08c-48a9-4c47-a9b0-361cfedb17f1 is in state STARTED
2026-02-04 00:52:06.696204 | orchestrator | 2026-02-04 00:52:06 | INFO  | Wait 1 second(s) until the next check
2026-02-04 00:52:09.753013 | orchestrator | 2026-02-04 00:52:09 | INFO  | Task d30e904e-f5ed-49cd-95ad-3c4d6f37bdb8 is in state STARTED
2026-02-04 00:52:09.754576 | orchestrator | 2026-02-04 00:52:09 | INFO  | Task c9ba018a-1caa-4480-8795-5d07c11e697b is in state STARTED
2026-02-04 00:52:09.756372 | orchestrator | 2026-02-04 00:52:09 | INFO  | Task bf6ef7d2-e02d-4506-bb9b-726c35056870 is in state STARTED
2026-02-04 00:52:09.758482 | orchestrator | 2026-02-04 00:52:09 | INFO  | Task 989a85c0-e4f7-4c9c-8bfd-ae78edd6b507 is in state STARTED
2026-02-04 00:52:09.759475 | orchestrator | 2026-02-04 00:52:09 | INFO  | Task 4b82724b-ecb1-4316-873a-4f7a89ed7b41 is in state STARTED
2026-02-04 00:52:09.761318 | orchestrator | 2026-02-04 00:52:09 | INFO  | Task 45d2c08c-48a9-4c47-a9b0-361cfedb17f1 is in state STARTED
2026-02-04 00:52:09.761375 | orchestrator | 2026-02-04 00:52:09 | INFO  | Wait 1 second(s) until the next check
2026-02-04 00:52:12.834276 | orchestrator | 2026-02-04 00:52:12 | INFO  | Task d30e904e-f5ed-49cd-95ad-3c4d6f37bdb8 is in state STARTED
2026-02-04 00:52:12.834360 | orchestrator | 2026-02-04 00:52:12 | INFO  | Task c9ba018a-1caa-4480-8795-5d07c11e697b is in state STARTED
2026-02-04 00:52:12.843021 | orchestrator | 2026-02-04 00:52:12 | INFO  | Task bf6ef7d2-e02d-4506-bb9b-726c35056870 is in state STARTED
2026-02-04 00:52:12.843109 | orchestrator | 2026-02-04 00:52:12 | INFO  | Task 989a85c0-e4f7-4c9c-8bfd-ae78edd6b507 is in state STARTED
2026-02-04 00:52:12.843118 | orchestrator | 2026-02-04 00:52:12 | INFO  | Task 4b82724b-ecb1-4316-873a-4f7a89ed7b41 is in state STARTED
2026-02-04 00:52:12.843123 | orchestrator | 2026-02-04 00:52:12 | INFO  | Task 45d2c08c-48a9-4c47-a9b0-361cfedb17f1 is in state SUCCESS
2026-02-04 00:52:12.844381 | orchestrator |
2026-02-04 00:52:12.844437 | orchestrator |
2026-02-04 00:52:12.844446 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-04 00:52:12.844454 | orchestrator |
2026-02-04 00:52:12.844476 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-04 00:52:12.844483 | orchestrator | Wednesday 04 February 2026 00:51:44 +0000 (0:00:00.324) 0:00:00.324 ****
2026-02-04 00:52:12.844511 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:52:12.844521 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:52:12.844527 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:52:12.844534 | orchestrator |
2026-02-04 00:52:12.844540 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-04 00:52:12.844547 | orchestrator | Wednesday 04 February 2026 00:51:45 +0000 (0:00:00.338) 0:00:00.662 ****
2026-02-04 00:52:12.844554 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True)
2026-02-04 00:52:12.844561 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True)
2026-02-04 00:52:12.844567 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True)
2026-02-04 00:52:12.844573 | orchestrator |
2026-02-04 00:52:12.844579 | orchestrator | PLAY [Apply role memcached] ****************************************************
2026-02-04 00:52:12.844586 | orchestrator |
2026-02-04 00:52:12.844592 | orchestrator | TASK [memcached : include_tasks] ***********************************************
2026-02-04 00:52:12.844598 | orchestrator | Wednesday 04 February 2026 00:51:46 +0000 (0:00:00.976) 0:00:01.639 ****
2026-02-04 00:52:12.844605 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 00:52:12.844613 | orchestrator |
2026-02-04 00:52:12.844619 | orchestrator | TASK [memcached : Ensuring config directories exist] ***************************
2026-02-04 00:52:12.844626 | orchestrator | Wednesday 04 February 2026 00:51:46 +0000 (0:00:00.706) 0:00:02.346 ****
2026-02-04 00:52:12.844632 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2026-02-04 00:52:12.844639 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2026-02-04 00:52:12.844645 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2026-02-04 00:52:12.844652 | orchestrator |
2026-02-04 00:52:12.844658 | orchestrator | TASK [memcached : Copying over config.json files for services] *****************
2026-02-04 00:52:12.844664 | orchestrator | Wednesday 04 February 2026 00:51:47 +0000 (0:00:01.036) 0:00:03.382 ****
2026-02-04 00:52:12.844670 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2026-02-04 00:52:12.844676 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2026-02-04 00:52:12.844683 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2026-02-04 00:52:12.844689 | orchestrator |
2026-02-04 00:52:12.844695 | orchestrator | TASK [memcached : Check memcached container] ***********************************
2026-02-04 00:52:12.844701 | orchestrator | Wednesday 04 February 2026 00:51:50 +0000 (0:00:02.508) 0:00:05.890 ****
2026-02-04 00:52:12.844708 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:52:12.844714 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:52:12.844721 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:52:12.844727 | orchestrator |
2026-02-04 00:52:12.844733 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] **********************
2026-02-04 00:52:12.844739 | orchestrator | Wednesday 04 February 2026 00:51:52 +0000 (0:00:02.386) 0:00:08.276 ****
2026-02-04 00:52:12.844745 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:52:12.844751 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:52:12.844758 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:52:12.844764 | orchestrator |
2026-02-04 00:52:12.844771 | orchestrator | PLAY RECAP *********************************************************************
2026-02-04 00:52:12.844778 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-04 00:52:12.844785 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-04 00:52:12.844791 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-04 00:52:12.844798 | orchestrator |
2026-02-04 00:52:12.844804 | orchestrator |
2026-02-04 00:52:12.844810 | orchestrator | TASKS RECAP ********************************************************************
2026-02-04 00:52:12.844822 | orchestrator | Wednesday 04 February 2026 00:51:56 +0000 (0:00:03.852) 0:00:12.129 ****
2026-02-04 00:52:12.844828 | orchestrator | ===============================================================================
2026-02-04 00:52:12.844834 | orchestrator | memcached : Restart memcached container --------------------------------- 3.85s
2026-02-04 00:52:12.844840 | orchestrator | memcached : Copying over config.json files for services ----------------- 2.51s
2026-02-04 00:52:12.844846 | orchestrator | memcached : Check memcached container ----------------------------------- 2.39s
2026-02-04 00:52:12.844853 | orchestrator | memcached : Ensuring config directories exist --------------------------- 1.04s
2026-02-04 00:52:12.844859 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.98s
2026-02-04 00:52:12.844865 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.71s
2026-02-04 00:52:12.844871 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.34s
2026-02-04 00:52:12.844877 | orchestrator |
2026-02-04 00:52:12.844883 | orchestrator |
2026-02-04 00:52:12.844889 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-04 00:52:12.844894 | orchestrator |
2026-02-04 00:52:12.844900 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-04 00:52:12.844906 | orchestrator | Wednesday 04 February 2026 00:51:45 +0000 (0:00:00.372) 0:00:00.372 ****
2026-02-04 00:52:12.844911 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:52:12.844916 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:52:12.844922 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:52:12.844928 | orchestrator |
2026-02-04 00:52:12.844934 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-04 00:52:12.844951 | orchestrator | Wednesday 04 February 2026 00:51:45 +0000 (0:00:00.661) 0:00:01.034 ****
2026-02-04 00:52:12.844957 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True)
2026-02-04 00:52:12.844964 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True)
2026-02-04 00:52:12.844970 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True)
2026-02-04 00:52:12.844976 | orchestrator |
2026-02-04 00:52:12.844995 | orchestrator | PLAY [Apply role redis] ********************************************************
2026-02-04 00:52:12.845009 | orchestrator |
2026-02-04 00:52:12.845015 | orchestrator | TASK [redis : include_tasks] ***************************************************
2026-02-04 00:52:12.845021 | orchestrator | Wednesday 04 February 2026 00:51:46 +0000 (0:00:00.615) 0:00:01.649 ****
2026-02-04 00:52:12.845027 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 00:52:12.845034 | orchestrator |
2026-02-04 00:52:12.845040 | orchestrator | TASK [redis : Ensuring config directories exist] *******************************
2026-02-04 00:52:12.845046 | orchestrator | Wednesday 04 February 2026 00:51:47 +0000 (0:00:00.771) 0:00:02.420 ****
2026-02-04 00:52:12.845054 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-04 00:52:12.845066 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-04 00:52:12.845073 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-04 00:52:12.845087 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3',
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-04 00:52:12.845102 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-04 00:52:12.845119 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-04 00:52:12.845126 | orchestrator | 2026-02-04 00:52:12.845133 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2026-02-04 00:52:12.845139 | orchestrator | Wednesday 04 February 2026 00:51:48 +0000 (0:00:01.346) 0:00:03.767 **** 2026-02-04 00:52:12.845145 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 
'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-04 00:52:12.845152 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-04 00:52:12.845164 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-04 00:52:12.845171 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': 
['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-04 00:52:12.845177 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-04 00:52:12.845190 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-04 00:52:12.845197 | orchestrator | 2026-02-04 00:52:12.845203 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2026-02-04 00:52:12.845210 | orchestrator 
| Wednesday 04 February 2026 00:51:51 +0000 (0:00:03.068) 0:00:06.836 **** 2026-02-04 00:52:12.845216 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-04 00:52:12.845223 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-04 00:52:12.845235 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-04 00:52:12.845241 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 
'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-04 00:52:12.845248 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-04 00:52:12.845255 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': 
'30'}}}) 2026-02-04 00:52:12.845261 | orchestrator | 2026-02-04 00:52:12.845271 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2026-02-04 00:52:12.845277 | orchestrator | Wednesday 04 February 2026 00:51:54 +0000 (0:00:03.176) 0:00:10.012 **** 2026-02-04 00:52:12.845287 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-04 00:52:12.845293 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-04 00:52:12.845304 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen 
redis-server 6379'], 'timeout': '30'}}}) 2026-02-04 00:52:12.845308 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-04 00:52:12.845312 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-04 00:52:12.845316 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-04 00:52:12.845320 | orchestrator | 2026-02-04 00:52:12.845324 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-02-04 00:52:12.845328 | orchestrator | Wednesday 04 February 2026 00:51:57 +0000 (0:00:02.660) 0:00:12.672 **** 2026-02-04 00:52:12.845332 | orchestrator | 2026-02-04 00:52:12.845335 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-02-04 00:52:12.845342 | orchestrator | Wednesday 04 February 2026 00:51:57 +0000 (0:00:00.216) 0:00:12.889 **** 2026-02-04 00:52:12.845346 | orchestrator | 2026-02-04 00:52:12.845362 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-02-04 00:52:12.845368 | orchestrator | Wednesday 04 February 2026 00:51:57 +0000 (0:00:00.183) 0:00:13.072 **** 2026-02-04 00:52:12.845395 | orchestrator | 2026-02-04 00:52:12.845409 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2026-02-04 00:52:12.845424 | orchestrator | Wednesday 04 February 2026 00:51:58 +0000 (0:00:00.256) 0:00:13.329 **** 2026-02-04 00:52:12.845430 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:52:12.845437 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:52:12.845442 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:52:12.845449 | orchestrator | 2026-02-04 00:52:12.845454 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2026-02-04 00:52:12.845466 | orchestrator | Wednesday 04 February 2026 00:52:03 +0000 (0:00:05.759) 0:00:19.088 **** 2026-02-04 00:52:12.845472 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:52:12.845478 | orchestrator | 
changed: [testbed-node-2] 2026-02-04 00:52:12.845484 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:52:12.845489 | orchestrator | 2026-02-04 00:52:12.845495 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 00:52:12.845501 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-04 00:52:12.845508 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-04 00:52:12.845514 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-04 00:52:12.845520 | orchestrator | 2026-02-04 00:52:12.845526 | orchestrator | 2026-02-04 00:52:12.845532 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-04 00:52:12.845538 | orchestrator | Wednesday 04 February 2026 00:52:10 +0000 (0:00:06.140) 0:00:25.228 **** 2026-02-04 00:52:12.845544 | orchestrator | =============================================================================== 2026-02-04 00:52:12.845551 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 6.14s 2026-02-04 00:52:12.845558 | orchestrator | redis : Restart redis container ----------------------------------------- 5.76s 2026-02-04 00:52:12.845562 | orchestrator | redis : Copying over redis config files --------------------------------- 3.18s 2026-02-04 00:52:12.845566 | orchestrator | redis : Copying over default config.json files -------------------------- 3.07s 2026-02-04 00:52:12.845570 | orchestrator | redis : Check redis containers ------------------------------------------ 2.66s 2026-02-04 00:52:12.845573 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.35s 2026-02-04 00:52:12.845577 | orchestrator | redis : include_tasks --------------------------------------------------- 0.77s 2026-02-04 
00:52:12.845581 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.66s 2026-02-04 00:52:12.845585 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.66s 2026-02-04 00:52:12.845588 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.62s 2026-02-04 00:52:12.845592 | orchestrator | 2026-02-04 00:52:12 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:52:15.921758 | orchestrator | 2026-02-04 00:52:15 | INFO  | Task d30e904e-f5ed-49cd-95ad-3c4d6f37bdb8 is in state STARTED 2026-02-04 00:52:15.921853 | orchestrator | 2026-02-04 00:52:15 | INFO  | Task c9ba018a-1caa-4480-8795-5d07c11e697b is in state STARTED 2026-02-04 00:52:15.921866 | orchestrator | 2026-02-04 00:52:15 | INFO  | Task bf6ef7d2-e02d-4506-bb9b-726c35056870 is in state STARTED 2026-02-04 00:52:15.921874 | orchestrator | 2026-02-04 00:52:15 | INFO  | Task 989a85c0-e4f7-4c9c-8bfd-ae78edd6b507 is in state STARTED 2026-02-04 00:52:15.921882 | orchestrator | 2026-02-04 00:52:15 | INFO  | Task 4b82724b-ecb1-4316-873a-4f7a89ed7b41 is in state STARTED 2026-02-04 00:52:15.921890 | orchestrator | 2026-02-04 00:52:15 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:52:18.951011 | orchestrator | 2026-02-04 00:52:18 | INFO  | Task d30e904e-f5ed-49cd-95ad-3c4d6f37bdb8 is in state STARTED 2026-02-04 00:52:18.951500 | orchestrator | 2026-02-04 00:52:18 | INFO  | Task c9ba018a-1caa-4480-8795-5d07c11e697b is in state STARTED 2026-02-04 00:52:18.953339 | orchestrator | 2026-02-04 00:52:18 | INFO  | Task bf6ef7d2-e02d-4506-bb9b-726c35056870 is in state STARTED 2026-02-04 00:52:18.955372 | orchestrator | 2026-02-04 00:52:18 | INFO  | Task 989a85c0-e4f7-4c9c-8bfd-ae78edd6b507 is in state STARTED 2026-02-04 00:52:18.956079 | orchestrator | 2026-02-04 00:52:18 | INFO  | Task 4b82724b-ecb1-4316-873a-4f7a89ed7b41 is in state STARTED 2026-02-04 00:52:18.956116 | orchestrator | 2026-02-04 
00:52:18 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:52:22.071800 | orchestrator | 2026-02-04 00:52:22 | INFO  | Task d30e904e-f5ed-49cd-95ad-3c4d6f37bdb8 is in state STARTED 2026-02-04 00:52:22.075961 | orchestrator | 2026-02-04 00:52:22 | INFO  | Task c9ba018a-1caa-4480-8795-5d07c11e697b is in state STARTED 2026-02-04 00:52:22.079883 | orchestrator | 2026-02-04 00:52:22 | INFO  | Task bf6ef7d2-e02d-4506-bb9b-726c35056870 is in state STARTED 2026-02-04 00:52:22.083666 | orchestrator | 2026-02-04 00:52:22 | INFO  | Task 989a85c0-e4f7-4c9c-8bfd-ae78edd6b507 is in state STARTED 2026-02-04 00:52:22.083740 | orchestrator | 2026-02-04 00:52:22 | INFO  | Task 4b82724b-ecb1-4316-873a-4f7a89ed7b41 is in state STARTED 2026-02-04 00:52:22.083749 | orchestrator | 2026-02-04 00:52:22 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:52:25.151460 | orchestrator | 2026-02-04 00:52:25 | INFO  | Task d30e904e-f5ed-49cd-95ad-3c4d6f37bdb8 is in state STARTED 2026-02-04 00:52:25.152343 | orchestrator | 2026-02-04 00:52:25 | INFO  | Task c9ba018a-1caa-4480-8795-5d07c11e697b is in state STARTED 2026-02-04 00:52:25.156816 | orchestrator | 2026-02-04 00:52:25 | INFO  | Task bf6ef7d2-e02d-4506-bb9b-726c35056870 is in state STARTED 2026-02-04 00:52:25.158327 | orchestrator | 2026-02-04 00:52:25 | INFO  | Task 989a85c0-e4f7-4c9c-8bfd-ae78edd6b507 is in state STARTED 2026-02-04 00:52:25.159330 | orchestrator | 2026-02-04 00:52:25 | INFO  | Task 4b82724b-ecb1-4316-873a-4f7a89ed7b41 is in state STARTED 2026-02-04 00:52:25.159364 | orchestrator | 2026-02-04 00:52:25 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:52:28.388543 | orchestrator | 2026-02-04 00:52:28 | INFO  | Task d30e904e-f5ed-49cd-95ad-3c4d6f37bdb8 is in state STARTED 2026-02-04 00:52:28.403866 | orchestrator | 2026-02-04 00:52:28 | INFO  | Task c9ba018a-1caa-4480-8795-5d07c11e697b is in state STARTED 2026-02-04 00:52:28.404574 | orchestrator | 2026-02-04 00:52:28 | INFO  | Task 
bf6ef7d2-e02d-4506-bb9b-726c35056870 is in state STARTED 2026-02-04 00:52:28.405756 | orchestrator | 2026-02-04 00:52:28 | INFO  | Task 989a85c0-e4f7-4c9c-8bfd-ae78edd6b507 is in state STARTED 2026-02-04 00:52:28.407144 | orchestrator | 2026-02-04 00:52:28 | INFO  | Task 4b82724b-ecb1-4316-873a-4f7a89ed7b41 is in state STARTED 2026-02-04 00:52:28.407180 | orchestrator | 2026-02-04 00:52:28 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:52:31.538301 | orchestrator | 2026-02-04 00:52:31 | INFO  | Task d30e904e-f5ed-49cd-95ad-3c4d6f37bdb8 is in state STARTED 2026-02-04 00:52:31.542680 | orchestrator | 2026-02-04 00:52:31 | INFO  | Task c9ba018a-1caa-4480-8795-5d07c11e697b is in state STARTED 2026-02-04 00:52:31.544273 | orchestrator | 2026-02-04 00:52:31 | INFO  | Task bf6ef7d2-e02d-4506-bb9b-726c35056870 is in state STARTED 2026-02-04 00:52:31.548445 | orchestrator | 2026-02-04 00:52:31 | INFO  | Task 989a85c0-e4f7-4c9c-8bfd-ae78edd6b507 is in state STARTED 2026-02-04 00:52:31.551192 | orchestrator | 2026-02-04 00:52:31 | INFO  | Task 4b82724b-ecb1-4316-873a-4f7a89ed7b41 is in state STARTED 2026-02-04 00:52:31.551252 | orchestrator | 2026-02-04 00:52:31 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:52:34.677548 | orchestrator | 2026-02-04 00:52:34 | INFO  | Task d30e904e-f5ed-49cd-95ad-3c4d6f37bdb8 is in state STARTED 2026-02-04 00:52:34.677650 | orchestrator | 2026-02-04 00:52:34 | INFO  | Task c9ba018a-1caa-4480-8795-5d07c11e697b is in state STARTED 2026-02-04 00:52:34.677695 | orchestrator | 2026-02-04 00:52:34 | INFO  | Task bf6ef7d2-e02d-4506-bb9b-726c35056870 is in state STARTED 2026-02-04 00:52:34.677709 | orchestrator | 2026-02-04 00:52:34 | INFO  | Task 989a85c0-e4f7-4c9c-8bfd-ae78edd6b507 is in state STARTED 2026-02-04 00:52:34.677726 | orchestrator | 2026-02-04 00:52:34 | INFO  | Task 4b82724b-ecb1-4316-873a-4f7a89ed7b41 is in state STARTED 2026-02-04 00:52:34.677750 | orchestrator | 2026-02-04 00:52:34 | INFO  | Wait 1 
second(s) until the next check 2026-02-04 00:52:37.862285 | orchestrator | 2026-02-04 00:52:37 | INFO  | Task d30e904e-f5ed-49cd-95ad-3c4d6f37bdb8 is in state STARTED 2026-02-04 00:52:37.862399 | orchestrator | 2026-02-04 00:52:37 | INFO  | Task c9ba018a-1caa-4480-8795-5d07c11e697b is in state STARTED 2026-02-04 00:52:37.863953 | orchestrator | 2026-02-04 00:52:37 | INFO  | Task bf6ef7d2-e02d-4506-bb9b-726c35056870 is in state STARTED 2026-02-04 00:52:37.864714 | orchestrator | 2026-02-04 00:52:37 | INFO  | Task 989a85c0-e4f7-4c9c-8bfd-ae78edd6b507 is in state STARTED 2026-02-04 00:52:37.866095 | orchestrator | 2026-02-04 00:52:37 | INFO  | Task 4b82724b-ecb1-4316-873a-4f7a89ed7b41 is in state STARTED 2026-02-04 00:52:37.866239 | orchestrator | 2026-02-04 00:52:37 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:52:40.910155 | orchestrator | 2026-02-04 00:52:40 | INFO  | Task d30e904e-f5ed-49cd-95ad-3c4d6f37bdb8 is in state STARTED 2026-02-04 00:52:40.910250 | orchestrator | 2026-02-04 00:52:40 | INFO  | Task c9ba018a-1caa-4480-8795-5d07c11e697b is in state STARTED 2026-02-04 00:52:40.910648 | orchestrator | 2026-02-04 00:52:40 | INFO  | Task bf6ef7d2-e02d-4506-bb9b-726c35056870 is in state STARTED 2026-02-04 00:52:40.913196 | orchestrator | 2026-02-04 00:52:40 | INFO  | Task 989a85c0-e4f7-4c9c-8bfd-ae78edd6b507 is in state STARTED 2026-02-04 00:52:40.913840 | orchestrator | 2026-02-04 00:52:40 | INFO  | Task 4b82724b-ecb1-4316-873a-4f7a89ed7b41 is in state STARTED 2026-02-04 00:52:40.913869 | orchestrator | 2026-02-04 00:52:40 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:52:43.963460 | orchestrator | 2026-02-04 00:52:43 | INFO  | Task d30e904e-f5ed-49cd-95ad-3c4d6f37bdb8 is in state STARTED 2026-02-04 00:52:43.965473 | orchestrator | 2026-02-04 00:52:43 | INFO  | Task c9ba018a-1caa-4480-8795-5d07c11e697b is in state STARTED 2026-02-04 00:52:43.967771 | orchestrator | 2026-02-04 00:52:43 | INFO  | Task 
bf6ef7d2-e02d-4506-bb9b-726c35056870 is in state STARTED
2026-02-04 00:52:43.968703 | orchestrator | 2026-02-04 00:52:43 | INFO  | Task 989a85c0-e4f7-4c9c-8bfd-ae78edd6b507 is in state STARTED
2026-02-04 00:52:43.971168 | orchestrator | 2026-02-04 00:52:43 | INFO  | Task 4b82724b-ecb1-4316-873a-4f7a89ed7b41 is in state STARTED
2026-02-04 00:52:43.971239 | orchestrator | 2026-02-04 00:52:43 | INFO  | Wait 1 second(s) until the next check
2026-02-04 00:52:47.053985 | orchestrator | 2026-02-04 00:52:47 | INFO  | Task d30e904e-f5ed-49cd-95ad-3c4d6f37bdb8 is in state STARTED
2026-02-04 00:52:47.054116 | orchestrator | 2026-02-04 00:52:47 | INFO  | Task c9ba018a-1caa-4480-8795-5d07c11e697b is in state STARTED
2026-02-04 00:52:47.054128 | orchestrator | 2026-02-04 00:52:47 | INFO  | Task bf6ef7d2-e02d-4506-bb9b-726c35056870 is in state STARTED
2026-02-04 00:52:47.054135 | orchestrator | 2026-02-04 00:52:47 | INFO  | Task 989a85c0-e4f7-4c9c-8bfd-ae78edd6b507 is in state STARTED
2026-02-04 00:52:47.054142 | orchestrator | 2026-02-04 00:52:47 | INFO  | Task 4b82724b-ecb1-4316-873a-4f7a89ed7b41 is in state STARTED
2026-02-04 00:52:47.054150 | orchestrator | 2026-02-04 00:52:47 | INFO  | Wait 1 second(s) until the next check
2026-02-04 00:52:50.087571 | orchestrator | 2026-02-04 00:52:50 | INFO  | Task d30e904e-f5ed-49cd-95ad-3c4d6f37bdb8 is in state STARTED
2026-02-04 00:52:50.090733 | orchestrator | 2026-02-04 00:52:50 | INFO  | Task c9ba018a-1caa-4480-8795-5d07c11e697b is in state STARTED
2026-02-04 00:52:50.091940 | orchestrator | 2026-02-04 00:52:50 | INFO  | Task bf6ef7d2-e02d-4506-bb9b-726c35056870 is in state STARTED
2026-02-04 00:52:50.097079 | orchestrator | 2026-02-04 00:52:50 | INFO  | Task 989a85c0-e4f7-4c9c-8bfd-ae78edd6b507 is in state STARTED
2026-02-04 00:52:50.098137 | orchestrator | 2026-02-04 00:52:50 | INFO  | Task 4b82724b-ecb1-4316-873a-4f7a89ed7b41 is in state STARTED
2026-02-04 00:52:50.098215 | orchestrator | 2026-02-04 00:52:50 | INFO  | Wait 1 second(s) until the next check
2026-02-04 00:52:53.273733 | orchestrator | 2026-02-04 00:52:53 | INFO  | Task d30e904e-f5ed-49cd-95ad-3c4d6f37bdb8 is in state STARTED
2026-02-04 00:52:53.273815 | orchestrator | 2026-02-04 00:52:53 | INFO  | Task c9ba018a-1caa-4480-8795-5d07c11e697b is in state STARTED
2026-02-04 00:52:53.273874 | orchestrator | 2026-02-04 00:52:53 | INFO  | Task bf6ef7d2-e02d-4506-bb9b-726c35056870 is in state STARTED
2026-02-04 00:52:53.274754 | orchestrator | 2026-02-04 00:52:53 | INFO  | Task 989a85c0-e4f7-4c9c-8bfd-ae78edd6b507 is in state STARTED
2026-02-04 00:52:53.275457 | orchestrator | 2026-02-04 00:52:53 | INFO  | Task 4b82724b-ecb1-4316-873a-4f7a89ed7b41 is in state STARTED
2026-02-04 00:52:53.275515 | orchestrator | 2026-02-04 00:52:53 | INFO  | Wait 1 second(s) until the next check
2026-02-04 00:52:56.345880 | orchestrator | 2026-02-04 00:52:56 | INFO  | Task d30e904e-f5ed-49cd-95ad-3c4d6f37bdb8 is in state STARTED
2026-02-04 00:52:56.346544 | orchestrator | 2026-02-04 00:52:56 | INFO  | Task c9ba018a-1caa-4480-8795-5d07c11e697b is in state STARTED
2026-02-04 00:52:56.347662 | orchestrator | 2026-02-04 00:52:56 | INFO  | Task bf6ef7d2-e02d-4506-bb9b-726c35056870 is in state STARTED
2026-02-04 00:52:56.348605 | orchestrator | 2026-02-04 00:52:56 | INFO  | Task 989a85c0-e4f7-4c9c-8bfd-ae78edd6b507 is in state STARTED
2026-02-04 00:52:56.349571 | orchestrator | 2026-02-04 00:52:56 | INFO  | Task 4b82724b-ecb1-4316-873a-4f7a89ed7b41 is in state STARTED
2026-02-04 00:52:56.349605 | orchestrator | 2026-02-04 00:52:56 | INFO  | Wait 1 second(s) until the next check
2026-02-04 00:52:59.442206 | orchestrator | 2026-02-04 00:52:59 | INFO  | Task d30e904e-f5ed-49cd-95ad-3c4d6f37bdb8 is in state STARTED
2026-02-04 00:52:59.445601 | orchestrator | 2026-02-04 00:52:59 | INFO  | Task c9ba018a-1caa-4480-8795-5d07c11e697b is in state STARTED
2026-02-04 00:52:59.446384 | orchestrator | 2026-02-04 00:52:59 | INFO  | Task bf6ef7d2-e02d-4506-bb9b-726c35056870 is in state STARTED
2026-02-04 00:52:59.448662 | orchestrator | 2026-02-04 00:52:59 | INFO  | Task 989a85c0-e4f7-4c9c-8bfd-ae78edd6b507 is in state STARTED
2026-02-04 00:52:59.450192 | orchestrator | 2026-02-04 00:52:59 | INFO  | Task 4b82724b-ecb1-4316-873a-4f7a89ed7b41 is in state STARTED
2026-02-04 00:52:59.450251 | orchestrator | 2026-02-04 00:52:59 | INFO  | Wait 1 second(s) until the next check
2026-02-04 00:53:02.496000 | orchestrator | 2026-02-04 00:53:02 | INFO  | Task d30e904e-f5ed-49cd-95ad-3c4d6f37bdb8 is in state STARTED
2026-02-04 00:53:02.496063 | orchestrator | 2026-02-04 00:53:02 | INFO  | Task c9ba018a-1caa-4480-8795-5d07c11e697b is in state STARTED
2026-02-04 00:53:02.496790 | orchestrator | 2026-02-04 00:53:02 | INFO  | Task bf6ef7d2-e02d-4506-bb9b-726c35056870 is in state STARTED
2026-02-04 00:53:02.497915 | orchestrator | 2026-02-04 00:53:02 | INFO  | Task 989a85c0-e4f7-4c9c-8bfd-ae78edd6b507 is in state STARTED
2026-02-04 00:53:02.499480 | orchestrator | 2026-02-04 00:53:02 | INFO  | Task 4b82724b-ecb1-4316-873a-4f7a89ed7b41 is in state STARTED
2026-02-04 00:53:02.499531 | orchestrator | 2026-02-04 00:53:02 | INFO  | Wait 1 second(s) until the next check
2026-02-04 00:53:05.584608 | orchestrator | 2026-02-04 00:53:05 | INFO  | Task d30e904e-f5ed-49cd-95ad-3c4d6f37bdb8 is in state STARTED
2026-02-04 00:53:05.584653 | orchestrator | 2026-02-04 00:53:05 | INFO  | Task c9ba018a-1caa-4480-8795-5d07c11e697b is in state STARTED
2026-02-04 00:53:05.584659 | orchestrator | 2026-02-04 00:53:05 | INFO  | Task bf6ef7d2-e02d-4506-bb9b-726c35056870 is in state STARTED
2026-02-04 00:53:05.584664 | orchestrator | 2026-02-04 00:53:05 | INFO  | Task 989a85c0-e4f7-4c9c-8bfd-ae78edd6b507 is in state STARTED
2026-02-04 00:53:05.584668 | orchestrator | 2026-02-04 00:53:05 | INFO  | Task 4b82724b-ecb1-4316-873a-4f7a89ed7b41 is in state STARTED
2026-02-04 00:53:05.584672 | orchestrator | 2026-02-04 00:53:05 | INFO  | Wait 1 second(s) until the next check
2026-02-04 00:53:08.610638 | orchestrator | 2026-02-04 00:53:08 | INFO  | Task d30e904e-f5ed-49cd-95ad-3c4d6f37bdb8 is in state STARTED
2026-02-04 00:53:08.613449 | orchestrator | 2026-02-04 00:53:08 | INFO  | Task c9ba018a-1caa-4480-8795-5d07c11e697b is in state SUCCESS
2026-02-04 00:53:08.614777 | orchestrator |
2026-02-04 00:53:08.614817 | orchestrator |
2026-02-04 00:53:08.614822 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-04 00:53:08.614828 | orchestrator |
2026-02-04 00:53:08.614832 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-04 00:53:08.614836 | orchestrator | Wednesday 04 February 2026 00:51:44 +0000 (0:00:00.292) 0:00:00.292 ****
2026-02-04 00:53:08.614840 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:53:08.614845 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:53:08.614849 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:53:08.614853 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:53:08.614857 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:53:08.614861 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:53:08.614865 | orchestrator |
2026-02-04 00:53:08.614869 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-04 00:53:08.614873 | orchestrator | Wednesday 04 February 2026 00:51:45 +0000 (0:00:01.010) 0:00:01.302 ****
2026-02-04 00:53:08.614877 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-02-04 00:53:08.614881 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-02-04 00:53:08.614885 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-02-04 00:53:08.614888 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-02-04 00:53:08.614892 |
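The repeated "Task … is in state STARTED / Wait 1 second(s) until the next check" records above come from the deployment helper polling a set of task IDs until each one leaves STARTED; the configured one-second wait plus scheduling overhead accounts for the roughly three-second gap between checks. A minimal sketch of such a poll loop (helper and parameter names are hypothetical, not the actual OSISM client):

```python
import time

def wait_for_tasks(get_state, task_ids, interval=1.0):
    """Poll task states until none is still STARTED (hypothetical helper
    mirroring the log pattern above; not the actual OSISM client code)."""
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)  # stop polling finished tasks
        if pending:
            print(f"Wait {interval:g} second(s) until the next check")
            time.sleep(interval)

# toy driver: task "b" needs three polls before it reports SUCCESS
states = iter(["STARTED", "STARTED", "SUCCESS"])
wait_for_tasks(lambda t: "SUCCESS" if t == "a" else next(states),
               ["a", "b"], interval=0)
```

Note that each task is polled independently, which is why the log repeats the full list of still-running task IDs on every cycle.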
orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-04 00:53:08.614896 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-04 00:53:08.614900 | orchestrator | 2026-02-04 00:53:08.614904 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2026-02-04 00:53:08.614908 | orchestrator | 2026-02-04 00:53:08.614911 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2026-02-04 00:53:08.614915 | orchestrator | Wednesday 04 February 2026 00:51:46 +0000 (0:00:01.270) 0:00:02.573 **** 2026-02-04 00:53:08.614919 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 00:53:08.614924 | orchestrator | 2026-02-04 00:53:08.614928 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-02-04 00:53:08.614932 | orchestrator | Wednesday 04 February 2026 00:51:48 +0000 (0:00:01.843) 0:00:04.416 **** 2026-02-04 00:53:08.614936 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-02-04 00:53:08.614940 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-02-04 00:53:08.614955 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-02-04 00:53:08.614960 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-02-04 00:53:08.614970 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-02-04 00:53:08.614974 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-02-04 00:53:08.614978 | orchestrator | 2026-02-04 00:53:08.614982 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-02-04 00:53:08.614986 | orchestrator | Wednesday 04 February 2026 00:51:50 +0000 (0:00:01.844) 0:00:06.260 **** 2026-02-04 
00:53:08.614989 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-02-04 00:53:08.614993 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-02-04 00:53:08.614997 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-02-04 00:53:08.615001 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-02-04 00:53:08.615005 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-02-04 00:53:08.615008 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-02-04 00:53:08.615012 | orchestrator | 2026-02-04 00:53:08.615016 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-02-04 00:53:08.615020 | orchestrator | Wednesday 04 February 2026 00:51:52 +0000 (0:00:02.143) 0:00:08.404 **** 2026-02-04 00:53:08.615024 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2026-02-04 00:53:08.615028 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:53:08.615032 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2026-02-04 00:53:08.615036 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:53:08.615039 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2026-02-04 00:53:08.615043 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:53:08.615047 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2026-02-04 00:53:08.615051 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:53:08.615055 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2026-02-04 00:53:08.615059 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:53:08.615062 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2026-02-04 00:53:08.615066 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:53:08.615070 | orchestrator | 2026-02-04 00:53:08.615074 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2026-02-04 
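The module-load tasks above do two things: load the openvswitch kernel module immediately, then persist it under /etc/modules-load.d/ so systemd-modules-load reloads it at boot; the "Drop module persistence" task is skipped because no module is being removed. A sketch of the same idea, assuming root privileges on a real host (the helper name and the injectable `loader` are hypothetical):

```python
import subprocess
from pathlib import Path

def persist_module(name, modules_load_dir="/etc/modules-load.d", loader=None):
    """Load a kernel module now and persist it across reboots; a sketch of
    what the module-load role does, not its actual implementation."""
    # immediate load, like the "Load modules" task (requires root)
    load = loader or (lambda m: subprocess.run(["modprobe", m], check=True))
    load(name)
    # persistence file, read by systemd-modules-load at boot
    conf = Path(modules_load_dir) / f"{name}.conf"
    conf.write_text(f"{name}\n")
    return conf
```

On a host, `persist_module("openvswitch")` would run `modprobe openvswitch` and write `/etc/modules-load.d/openvswitch.conf` containing the module name.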
00:53:08.615078 | orchestrator | Wednesday 04 February 2026 00:51:54 +0000 (0:00:01.981) 0:00:10.385 **** 2026-02-04 00:53:08.615082 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:53:08.615085 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:53:08.615090 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:53:08.615094 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:53:08.615098 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:53:08.615101 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:53:08.615105 | orchestrator | 2026-02-04 00:53:08.615109 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2026-02-04 00:53:08.615113 | orchestrator | Wednesday 04 February 2026 00:51:55 +0000 (0:00:01.670) 0:00:12.056 **** 2026-02-04 00:53:08.615126 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-04 00:53:08.615132 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-04 00:53:08.615139 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-04 00:53:08.615146 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-04 00:53:08.615150 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 
'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-04 00:53:08.615154 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-04 00:53:08.615161 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-04 00:53:08.615168 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-04 00:53:08.615174 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-04 00:53:08.615178 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-04 00:53:08.615182 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-04 00:53:08.615189 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-04 00:53:08.615193 | orchestrator | 2026-02-04 00:53:08.615197 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2026-02-04 00:53:08.615201 | orchestrator | Wednesday 04 February 2026 00:51:59 +0000 (0:00:03.431) 
0:00:15.487 **** 2026-02-04 00:53:08.615205 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-04 00:53:08.615214 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-04 00:53:08.615221 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-04 00:53:08.615232 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-04 00:53:08.615240 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-04 00:53:08.615255 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': 
True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-04 00:53:08.615267 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-04 00:53:08.615274 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-04 00:53:08.615284 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-04 00:53:08.615290 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-04 00:53:08.615297 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-04 00:53:08.615308 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-04 00:53:08.615389 | orchestrator | 2026-02-04 00:53:08.615430 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2026-02-04 00:53:08.615436 | orchestrator | Wednesday 04 February 2026 00:52:04 +0000 (0:00:05.102) 0:00:20.589 **** 2026-02-04 00:53:08.615441 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:53:08.615446 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:53:08.615451 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:53:08.615455 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:53:08.615460 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:53:08.615464 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:53:08.615468 | orchestrator | 2026-02-04 00:53:08.615473 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2026-02-04 00:53:08.615477 | orchestrator | Wednesday 04 February 2026 00:52:07 +0000 (0:00:02.599) 0:00:23.189 **** 2026-02-04 00:53:08.615482 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': 
{'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-04 00:53:08.615496 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-04 00:53:08.615501 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-04 00:53:08.615505 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-04 00:53:08.615519 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-04 00:53:08.615524 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-04 00:53:08.615529 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-04 00:53:08.615536 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-04 00:53:08.615541 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 
'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-04 00:53:08.615545 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-04 00:53:08.615557 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl 
version'], 'timeout': '30'}}}) 2026-02-04 00:53:08.615562 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-04 00:53:08.615567 | orchestrator | 2026-02-04 00:53:08.615570 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-04 00:53:08.615574 | orchestrator | Wednesday 04 February 2026 00:52:11 +0000 (0:00:04.558) 0:00:27.747 **** 2026-02-04 00:53:08.615578 | orchestrator | 2026-02-04 00:53:08.615582 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-04 00:53:08.615586 | orchestrator | Wednesday 04 February 2026 00:52:12 +0000 (0:00:00.458) 0:00:28.206 **** 2026-02-04 00:53:08.615590 | orchestrator | 2026-02-04 00:53:08.615594 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-04 00:53:08.615597 | orchestrator | Wednesday 04 February 2026 00:52:12 +0000 (0:00:00.240) 0:00:28.446 **** 2026-02-04 00:53:08.615601 | orchestrator | 2026-02-04 00:53:08.615605 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-04 00:53:08.615609 | orchestrator | Wednesday 04 February 2026 00:52:12 +0000 (0:00:00.269) 0:00:28.715 **** 2026-02-04 00:53:08.615613 | orchestrator | 2026-02-04 00:53:08.615617 | orchestrator | TASK 
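The container definitions logged above all carry the same healthcheck block (`interval: 30`, `retries: 3`, `start_period: 5`, `timeout: 30`, plus a `test` command such as `ovsdb-client list-dbs` or `ovs-appctl version`). As a rough back-of-the-envelope check on what those numbers mean, the sketch below estimates the worst-case time before Docker would flag one of these containers unhealthy. This is an illustrative estimate based on standard Docker healthcheck semantics (each probe may run up to `timeout` seconds, and `retries` consecutive failures are needed), not something stated in the log or guaranteed by kolla-ansible.

```python
# Healthcheck parameters as they appear in the openvswitch container
# definitions above (values converted from strings to seconds).
hc = {"interval": 30, "retries": 3, "start_period": 5, "timeout": 30}

def worst_case_unhealthy(hc):
    """Rough upper bound, in seconds, before an unresponsive container is
    marked unhealthy: after the start_period grace window, each failing
    probe costs up to interval + timeout, repeated `retries` times."""
    return hc["start_period"] + hc["retries"] * (hc["interval"] + hc["timeout"])

print(worst_case_unhealthy(hc))  # 185
```

So with these settings, an openvswitch container that stops answering its probe would take roughly three minutes to show up as unhealthy.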
[openvswitch : Flush Handlers] ******************************************** 2026-02-04 00:53:08.615623 | orchestrator | Wednesday 04 February 2026 00:52:12 +0000 (0:00:00.196) 0:00:28.912 **** 2026-02-04 00:53:08.615627 | orchestrator | 2026-02-04 00:53:08.615630 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-04 00:53:08.615634 | orchestrator | Wednesday 04 February 2026 00:52:13 +0000 (0:00:00.258) 0:00:29.171 **** 2026-02-04 00:53:08.615638 | orchestrator | 2026-02-04 00:53:08.615642 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2026-02-04 00:53:08.615646 | orchestrator | Wednesday 04 February 2026 00:52:13 +0000 (0:00:00.522) 0:00:29.694 **** 2026-02-04 00:53:08.615649 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:53:08.615653 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:53:08.615657 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:53:08.615661 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:53:08.615665 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:53:08.615675 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:53:08.615679 | orchestrator | 2026-02-04 00:53:08.615683 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2026-02-04 00:53:08.615687 | orchestrator | Wednesday 04 February 2026 00:52:23 +0000 (0:00:10.277) 0:00:39.971 **** 2026-02-04 00:53:08.615690 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:53:08.615695 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:53:08.615698 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:53:08.615702 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:53:08.615706 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:53:08.615710 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:53:08.615714 | orchestrator | 2026-02-04 00:53:08.615718 | orchestrator | RUNNING HANDLER [openvswitch : Restart 
openvswitch-vswitchd container] ********* 2026-02-04 00:53:08.615722 | orchestrator | Wednesday 04 February 2026 00:52:25 +0000 (0:00:01.658) 0:00:41.630 **** 2026-02-04 00:53:08.615726 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:53:08.615730 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:53:08.615733 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:53:08.615737 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:53:08.615741 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:53:08.615745 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:53:08.615749 | orchestrator | 2026-02-04 00:53:08.615753 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2026-02-04 00:53:08.615757 | orchestrator | Wednesday 04 February 2026 00:52:35 +0000 (0:00:09.652) 0:00:51.283 **** 2026-02-04 00:53:08.615761 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2026-02-04 00:53:08.615765 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2026-02-04 00:53:08.615768 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2026-02-04 00:53:08.615772 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2026-02-04 00:53:08.615776 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2026-02-04 00:53:08.615782 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2026-02-04 00:53:08.615786 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2026-02-04 00:53:08.615790 | orchestrator | changed: [testbed-node-4] 
=> (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2026-02-04 00:53:08.615794 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2026-02-04 00:53:08.615798 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2026-02-04 00:53:08.615802 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2026-02-04 00:53:08.615806 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2026-02-04 00:53:08.615809 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-04 00:53:08.615813 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-04 00:53:08.615830 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-04 00:53:08.615835 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-04 00:53:08.615839 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-04 00:53:08.615846 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-04 00:53:08.615850 | orchestrator | 2026-02-04 00:53:08.615854 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2026-02-04 00:53:08.615858 | orchestrator | Wednesday 04 February 2026 00:52:45 +0000 (0:00:10.152) 0:01:01.435 **** 2026-02-04 00:53:08.615862 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2026-02-04 
00:53:08.615866 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:53:08.615869 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2026-02-04 00:53:08.615873 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:53:08.615877 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2026-02-04 00:53:08.615881 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:53:08.615887 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2026-02-04 00:53:08.615891 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2026-02-04 00:53:08.615895 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2026-02-04 00:53:08.615899 | orchestrator | 2026-02-04 00:53:08.615903 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2026-02-04 00:53:08.615907 | orchestrator | Wednesday 04 February 2026 00:52:49 +0000 (0:00:04.283) 0:01:05.719 **** 2026-02-04 00:53:08.615911 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2026-02-04 00:53:08.615914 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:53:08.615918 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2026-02-04 00:53:08.615922 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:53:08.615926 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2026-02-04 00:53:08.615930 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:53:08.615934 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2026-02-04 00:53:08.615938 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2026-02-04 00:53:08.615941 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2026-02-04 00:53:08.615945 | orchestrator | 2026-02-04 00:53:08.615949 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-02-04 00:53:08.615953 | orchestrator | Wednesday 04 February 2026 00:52:54 +0000 
(0:00:05.032) 0:01:10.751 **** 2026-02-04 00:53:08.615957 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:53:08.615961 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:53:08.615966 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:53:08.615972 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:53:08.615981 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:53:08.615990 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:53:08.616014 | orchestrator | 2026-02-04 00:53:08.616021 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 00:53:08.616027 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-04 00:53:08.616034 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-04 00:53:08.616040 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-04 00:53:08.616046 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-04 00:53:08.616052 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-04 00:53:08.616062 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-04 00:53:08.616072 | orchestrator | 2026-02-04 00:53:08.616078 | orchestrator | 2026-02-04 00:53:08.616084 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-04 00:53:08.616091 | orchestrator | Wednesday 04 February 2026 00:53:04 +0000 (0:00:10.204) 0:01:20.955 **** 2026-02-04 00:53:08.616097 | orchestrator | =============================================================================== 2026-02-04 00:53:08.616103 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 19.86s 
2026-02-04 00:53:08.616109 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 10.28s 2026-02-04 00:53:08.616116 | orchestrator | openvswitch : Set system-id, hostname and hw-offload ------------------- 10.15s 2026-02-04 00:53:08.616122 | orchestrator | openvswitch : Copying over config.json files for services --------------- 5.10s 2026-02-04 00:53:08.616128 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 5.03s 2026-02-04 00:53:08.616135 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 4.56s 2026-02-04 00:53:08.616141 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 4.28s 2026-02-04 00:53:08.616147 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 3.43s 2026-02-04 00:53:08.616153 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 2.60s 2026-02-04 00:53:08.616159 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 2.14s 2026-02-04 00:53:08.616163 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.98s 2026-02-04 00:53:08.616166 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.95s 2026-02-04 00:53:08.616170 | orchestrator | module-load : Load modules ---------------------------------------------- 1.84s 2026-02-04 00:53:08.616176 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.84s 2026-02-04 00:53:08.616182 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 1.67s 2026-02-04 00:53:08.616188 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.66s 2026-02-04 00:53:08.616195 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.27s 2026-02-04 
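The "Set system-id, hostname and hw-offload" task above loops over items of the form `{'col': …, 'name': …, 'value': …}` (with `'state': 'absent'` for `hw-offload`). The sketch below shows one plausible way such items map onto `ovs-vsctl` invocations against the `Open_vSwitch` table. The command shapes are assumptions based on common `ovs-vsctl set`/`remove` usage, not taken from the kolla-ansible role itself.

```python
# Hypothetical mapping from the task's loop items to ovs-vsctl argument
# vectors. The item dicts mirror those printed in the log; the resulting
# command forms are an assumption, not the role's actual implementation.
def ovs_vsctl_cmd(item):
    if item.get("state") == "absent":
        # hw-offload items carry 'state': 'absent' and are removed, which
        # matches the log reporting them as 'ok' with nothing to set.
        return ["ovs-vsctl", "remove", "Open_vSwitch", ".",
                item["col"], item["name"]]
    return ["ovs-vsctl", "set", "Open_vSwitch", ".",
            f"{item['col']}:{item['name']}={item['value']}"]

print(ovs_vsctl_cmd({"col": "external_ids", "name": "system-id",
                     "value": "testbed-node-0"}))
print(ovs_vsctl_cmd({"col": "other_config", "name": "hw-offload",
                     "value": True, "state": "absent"}))
```

On a deployed node, `ovs-vsctl get Open_vSwitch . external_ids` would show the resulting `system-id` and `hostname` entries.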
00:53:08.616201 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.01s 2026-02-04 00:53:08.616663 | orchestrator | 2026-02-04 00:53:08 | INFO  | Task bf6ef7d2-e02d-4506-bb9b-726c35056870 is in state STARTED 2026-02-04 00:53:08.620006 | orchestrator | 2026-02-04 00:53:08 | INFO  | Task 989a85c0-e4f7-4c9c-8bfd-ae78edd6b507 is in state STARTED 2026-02-04 00:53:08.620494 | orchestrator | 2026-02-04 00:53:08 | INFO  | Task 7700df81-1273-4fd3-ae37-f13a7e14c535 is in state STARTED 2026-02-04 00:53:08.622050 | orchestrator | 2026-02-04 00:53:08 | INFO  | Task 4b82724b-ecb1-4316-873a-4f7a89ed7b41 is in state STARTED 2026-02-04 00:53:08.622078 | orchestrator | 2026-02-04 00:53:08 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:53:11.661486 | orchestrator | 2026-02-04 00:53:11 | INFO  | Task d30e904e-f5ed-49cd-95ad-3c4d6f37bdb8 is in state STARTED 2026-02-04 00:53:11.661541 | orchestrator | 2026-02-04 00:53:11 | INFO  | Task bf6ef7d2-e02d-4506-bb9b-726c35056870 is in state STARTED 2026-02-04 00:53:11.661547 | orchestrator | 2026-02-04 00:53:11 | INFO  | Task 989a85c0-e4f7-4c9c-8bfd-ae78edd6b507 is in state STARTED 2026-02-04 00:53:11.661552 | orchestrator | 2026-02-04 00:53:11 | INFO  | Task 7700df81-1273-4fd3-ae37-f13a7e14c535 is in state STARTED 2026-02-04 00:53:11.661556 | orchestrator | 2026-02-04 00:53:11 | INFO  | Task 4b82724b-ecb1-4316-873a-4f7a89ed7b41 is in state STARTED 2026-02-04 00:53:11.661590 | orchestrator | 2026-02-04 00:53:11 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:53:14.737899 | orchestrator | 2026-02-04 00:53:14 | INFO  | Task d30e904e-f5ed-49cd-95ad-3c4d6f37bdb8 is in state STARTED 2026-02-04 00:53:14.738879 | orchestrator | 2026-02-04 00:53:14 | INFO  | Task bf6ef7d2-e02d-4506-bb9b-726c35056870 is in state STARTED 2026-02-04 00:53:14.740096 | orchestrator | 2026-02-04 00:53:14 | INFO  | Task 989a85c0-e4f7-4c9c-8bfd-ae78edd6b507 is in state STARTED 2026-02-04 00:53:14.742094 | 
orchestrator | 2026-02-04 00:53:14 | INFO  | Task 7700df81-1273-4fd3-ae37-f13a7e14c535 is in state STARTED 2026-02-04 00:53:14.743113 | orchestrator | 2026-02-04 00:53:14 | INFO  | Task 4b82724b-ecb1-4316-873a-4f7a89ed7b41 is in state STARTED 2026-02-04 00:53:14.743131 | orchestrator | 2026-02-04 00:53:14 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:53:17.790516 | orchestrator | 2026-02-04 00:53:17 | INFO  | Task d30e904e-f5ed-49cd-95ad-3c4d6f37bdb8 is in state STARTED 2026-02-04 00:53:17.797585 | orchestrator | 2026-02-04 00:53:17 | INFO  | Task bf6ef7d2-e02d-4506-bb9b-726c35056870 is in state STARTED 2026-02-04 00:53:17.799673 | orchestrator | 2026-02-04 00:53:17 | INFO  | Task 989a85c0-e4f7-4c9c-8bfd-ae78edd6b507 is in state STARTED 2026-02-04 00:53:17.801495 | orchestrator | 2026-02-04 00:53:17 | INFO  | Task 7700df81-1273-4fd3-ae37-f13a7e14c535 is in state STARTED 2026-02-04 00:53:17.802983 | orchestrator | 2026-02-04 00:53:17 | INFO  | Task 4b82724b-ecb1-4316-873a-4f7a89ed7b41 is in state STARTED 2026-02-04 00:53:17.803096 | orchestrator | 2026-02-04 00:53:17 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:53:20.862970 | orchestrator | 2026-02-04 00:53:20 | INFO  | Task d30e904e-f5ed-49cd-95ad-3c4d6f37bdb8 is in state STARTED 2026-02-04 00:53:20.864102 | orchestrator | 2026-02-04 00:53:20 | INFO  | Task bf6ef7d2-e02d-4506-bb9b-726c35056870 is in state STARTED 2026-02-04 00:53:20.865235 | orchestrator | 2026-02-04 00:53:20 | INFO  | Task 989a85c0-e4f7-4c9c-8bfd-ae78edd6b507 is in state STARTED 2026-02-04 00:53:20.866480 | orchestrator | 2026-02-04 00:53:20 | INFO  | Task 7700df81-1273-4fd3-ae37-f13a7e14c535 is in state STARTED 2026-02-04 00:53:20.867758 | orchestrator | 2026-02-04 00:53:20 | INFO  | Task 4b82724b-ecb1-4316-873a-4f7a89ed7b41 is in state STARTED 2026-02-04 00:53:20.867918 | orchestrator | 2026-02-04 00:53:20 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:53:23.905633 | orchestrator | 2026-02-04 
00:53:23 | INFO  | Task d30e904e-f5ed-49cd-95ad-3c4d6f37bdb8 is in state STARTED 2026-02-04 00:53:23.906451 | orchestrator | 2026-02-04 00:53:23 | INFO  | Task bf6ef7d2-e02d-4506-bb9b-726c35056870 is in state STARTED 2026-02-04 00:53:23.907580 | orchestrator | 2026-02-04 00:53:23 | INFO  | Task 989a85c0-e4f7-4c9c-8bfd-ae78edd6b507 is in state STARTED 2026-02-04 00:53:23.908910 | orchestrator | 2026-02-04 00:53:23 | INFO  | Task 7700df81-1273-4fd3-ae37-f13a7e14c535 is in state STARTED 2026-02-04 00:53:23.909649 | orchestrator | 2026-02-04 00:53:23 | INFO  | Task 4b82724b-ecb1-4316-873a-4f7a89ed7b41 is in state STARTED 2026-02-04 00:53:23.909694 | orchestrator | 2026-02-04 00:53:23 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:53:27.126443 | orchestrator | 2026-02-04 00:53:27 | INFO  | Task d30e904e-f5ed-49cd-95ad-3c4d6f37bdb8 is in state STARTED 2026-02-04 00:53:27.126548 | orchestrator | 2026-02-04 00:53:27 | INFO  | Task bf6ef7d2-e02d-4506-bb9b-726c35056870 is in state STARTED 2026-02-04 00:53:27.126559 | orchestrator | 2026-02-04 00:53:27 | INFO  | Task 989a85c0-e4f7-4c9c-8bfd-ae78edd6b507 is in state STARTED 2026-02-04 00:53:27.126567 | orchestrator | 2026-02-04 00:53:27 | INFO  | Task 7700df81-1273-4fd3-ae37-f13a7e14c535 is in state STARTED 2026-02-04 00:53:27.126573 | orchestrator | 2026-02-04 00:53:27 | INFO  | Task 4b82724b-ecb1-4316-873a-4f7a89ed7b41 is in state STARTED 2026-02-04 00:53:27.126580 | orchestrator | 2026-02-04 00:53:27 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:53:30.103114 | orchestrator | 2026-02-04 00:53:30 | INFO  | Task d30e904e-f5ed-49cd-95ad-3c4d6f37bdb8 is in state STARTED 2026-02-04 00:53:30.104346 | orchestrator | 2026-02-04 00:53:30 | INFO  | Task bf6ef7d2-e02d-4506-bb9b-726c35056870 is in state STARTED 2026-02-04 00:53:30.105518 | orchestrator | 2026-02-04 00:53:30 | INFO  | Task 989a85c0-e4f7-4c9c-8bfd-ae78edd6b507 is in state STARTED 2026-02-04 00:53:30.107528 | orchestrator | 2026-02-04 
00:53:30 | INFO  | Task 7700df81-1273-4fd3-ae37-f13a7e14c535 is in state STARTED 2026-02-04 00:53:30.108531 | orchestrator | 2026-02-04 00:53:30 | INFO  | Task 4b82724b-ecb1-4316-873a-4f7a89ed7b41 is in state STARTED 2026-02-04 00:53:30.108717 | orchestrator | 2026-02-04 00:53:30 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:53:33.148931 | orchestrator | 2026-02-04 00:53:33 | INFO  | Task d30e904e-f5ed-49cd-95ad-3c4d6f37bdb8 is in state STARTED 2026-02-04 00:53:33.150541 | orchestrator | 2026-02-04 00:53:33 | INFO  | Task bf6ef7d2-e02d-4506-bb9b-726c35056870 is in state STARTED 2026-02-04 00:53:33.153624 | orchestrator | 2026-02-04 00:53:33 | INFO  | Task 989a85c0-e4f7-4c9c-8bfd-ae78edd6b507 is in state STARTED 2026-02-04 00:53:33.155794 | orchestrator | 2026-02-04 00:53:33 | INFO  | Task 7700df81-1273-4fd3-ae37-f13a7e14c535 is in state STARTED 2026-02-04 00:53:33.158035 | orchestrator | 2026-02-04 00:53:33 | INFO  | Task 4b82724b-ecb1-4316-873a-4f7a89ed7b41 is in state STARTED 2026-02-04 00:53:33.158086 | orchestrator | 2026-02-04 00:53:33 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:53:36.214636 | orchestrator | 2026-02-04 00:53:36 | INFO  | Task d30e904e-f5ed-49cd-95ad-3c4d6f37bdb8 is in state STARTED 2026-02-04 00:53:36.215948 | orchestrator | 2026-02-04 00:53:36 | INFO  | Task bf6ef7d2-e02d-4506-bb9b-726c35056870 is in state STARTED 2026-02-04 00:53:36.217434 | orchestrator | 2026-02-04 00:53:36 | INFO  | Task 989a85c0-e4f7-4c9c-8bfd-ae78edd6b507 is in state STARTED 2026-02-04 00:53:36.218528 | orchestrator | 2026-02-04 00:53:36 | INFO  | Task 7700df81-1273-4fd3-ae37-f13a7e14c535 is in state STARTED 2026-02-04 00:53:36.220829 | orchestrator | 2026-02-04 00:53:36 | INFO  | Task 4b82724b-ecb1-4316-873a-4f7a89ed7b41 is in state STARTED 2026-02-04 00:53:36.220893 | orchestrator | 2026-02-04 00:53:36 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:53:39.268928 | orchestrator | 2026-02-04 00:53:39 | INFO  | Task 
d30e904e-f5ed-49cd-95ad-3c4d6f37bdb8 is in state STARTED 2026-02-04 00:53:39.271189 | orchestrator | 2026-02-04 00:53:39 | INFO  | Task bf6ef7d2-e02d-4506-bb9b-726c35056870 is in state STARTED 2026-02-04 00:53:39.272867 | orchestrator | 2026-02-04 00:53:39 | INFO  | Task 989a85c0-e4f7-4c9c-8bfd-ae78edd6b507 is in state STARTED 2026-02-04 00:53:39.275057 | orchestrator | 2026-02-04 00:53:39 | INFO  | Task 7700df81-1273-4fd3-ae37-f13a7e14c535 is in state STARTED 2026-02-04 00:53:39.276603 | orchestrator | 2026-02-04 00:53:39 | INFO  | Task 4b82724b-ecb1-4316-873a-4f7a89ed7b41 is in state STARTED 2026-02-04 00:53:39.276894 | orchestrator | 2026-02-04 00:53:39 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:53:42.333545 | orchestrator | 2026-02-04 00:53:42 | INFO  | Task d30e904e-f5ed-49cd-95ad-3c4d6f37bdb8 is in state STARTED 2026-02-04 00:53:42.338153 | orchestrator | 2026-02-04 00:53:42 | INFO  | Task bf6ef7d2-e02d-4506-bb9b-726c35056870 is in state STARTED 2026-02-04 00:53:42.340681 | orchestrator | 2026-02-04 00:53:42 | INFO  | Task 989a85c0-e4f7-4c9c-8bfd-ae78edd6b507 is in state STARTED 2026-02-04 00:53:42.342286 | orchestrator | 2026-02-04 00:53:42 | INFO  | Task 7700df81-1273-4fd3-ae37-f13a7e14c535 is in state STARTED 2026-02-04 00:53:42.344046 | orchestrator | 2026-02-04 00:53:42 | INFO  | Task 4b82724b-ecb1-4316-873a-4f7a89ed7b41 is in state STARTED 2026-02-04 00:53:42.344092 | orchestrator | 2026-02-04 00:53:42 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:53:45.426782 | orchestrator | 2026-02-04 00:53:45 | INFO  | Task d30e904e-f5ed-49cd-95ad-3c4d6f37bdb8 is in state STARTED 2026-02-04 00:53:45.428602 | orchestrator | 2026-02-04 00:53:45 | INFO  | Task bf6ef7d2-e02d-4506-bb9b-726c35056870 is in state STARTED 2026-02-04 00:53:45.431841 | orchestrator | 2026-02-04 00:53:45 | INFO  | Task 989a85c0-e4f7-4c9c-8bfd-ae78edd6b507 is in state STARTED 2026-02-04 00:53:45.432651 | orchestrator | 2026-02-04 00:53:45 | INFO  | Task 
7700df81-1273-4fd3-ae37-f13a7e14c535 is in state STARTED 2026-02-04 00:53:45.433520 | orchestrator | 2026-02-04 00:53:45 | INFO  | Task 4b82724b-ecb1-4316-873a-4f7a89ed7b41 is in state STARTED 2026-02-04 00:53:45.433572 | orchestrator | 2026-02-04 00:53:45 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:53:48.480900 | orchestrator | 2026-02-04 00:53:48 | INFO  | Task d30e904e-f5ed-49cd-95ad-3c4d6f37bdb8 is in state STARTED 2026-02-04 00:53:48.482394 | orchestrator | 2026-02-04 00:53:48 | INFO  | Task bf6ef7d2-e02d-4506-bb9b-726c35056870 is in state STARTED 2026-02-04 00:53:48.483886 | orchestrator | 2026-02-04 00:53:48 | INFO  | Task 989a85c0-e4f7-4c9c-8bfd-ae78edd6b507 is in state STARTED 2026-02-04 00:53:48.485235 | orchestrator | 2026-02-04 00:53:48 | INFO  | Task 7700df81-1273-4fd3-ae37-f13a7e14c535 is in state STARTED 2026-02-04 00:53:48.486824 | orchestrator | 2026-02-04 00:53:48 | INFO  | Task 4b82724b-ecb1-4316-873a-4f7a89ed7b41 is in state STARTED 2026-02-04 00:53:48.487234 | orchestrator | 2026-02-04 00:53:48 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:53:51.529344 | orchestrator | 2026-02-04 00:53:51 | INFO  | Task d30e904e-f5ed-49cd-95ad-3c4d6f37bdb8 is in state STARTED 2026-02-04 00:53:51.530091 | orchestrator | 2026-02-04 00:53:51 | INFO  | Task bf6ef7d2-e02d-4506-bb9b-726c35056870 is in state STARTED 2026-02-04 00:53:51.531100 | orchestrator | 2026-02-04 00:53:51 | INFO  | Task 989a85c0-e4f7-4c9c-8bfd-ae78edd6b507 is in state STARTED 2026-02-04 00:53:51.532926 | orchestrator | 2026-02-04 00:53:51 | INFO  | Task 7700df81-1273-4fd3-ae37-f13a7e14c535 is in state STARTED 2026-02-04 00:53:51.533880 | orchestrator | 2026-02-04 00:53:51 | INFO  | Task 4b82724b-ecb1-4316-873a-4f7a89ed7b41 is in state STARTED 2026-02-04 00:53:51.533908 | orchestrator | 2026-02-04 00:53:51 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:53:54.584007 | orchestrator | 2026-02-04 00:53:54 | INFO  | Task 
d30e904e-f5ed-49cd-95ad-3c4d6f37bdb8 is in state STARTED 2026-02-04 00:53:54.589480 | orchestrator | 2026-02-04 00:53:54 | INFO  | Task bf6ef7d2-e02d-4506-bb9b-726c35056870 is in state STARTED 2026-02-04 00:53:54.595958 | orchestrator | 2026-02-04 00:53:54 | INFO  | Task 989a85c0-e4f7-4c9c-8bfd-ae78edd6b507 is in state STARTED 2026-02-04 00:53:54.596987 | orchestrator | 2026-02-04 00:53:54 | INFO  | Task 7700df81-1273-4fd3-ae37-f13a7e14c535 is in state STARTED 2026-02-04 00:53:54.602865 | orchestrator | 2026-02-04 00:53:54 | INFO  | Task 4b82724b-ecb1-4316-873a-4f7a89ed7b41 is in state STARTED 2026-02-04 00:53:54.605142 | orchestrator | 2026-02-04 00:53:54 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:53:57.661297 | orchestrator | 2026-02-04 00:53:57 | INFO  | Task d30e904e-f5ed-49cd-95ad-3c4d6f37bdb8 is in state STARTED 2026-02-04 00:53:57.664880 | orchestrator | 2026-02-04 00:53:57 | INFO  | Task bf6ef7d2-e02d-4506-bb9b-726c35056870 is in state STARTED 2026-02-04 00:53:57.665729 | orchestrator | 2026-02-04 00:53:57 | INFO  | Task 989a85c0-e4f7-4c9c-8bfd-ae78edd6b507 is in state STARTED 2026-02-04 00:53:57.670617 | orchestrator | 2026-02-04 00:53:57 | INFO  | Task 7700df81-1273-4fd3-ae37-f13a7e14c535 is in state STARTED 2026-02-04 00:53:57.673165 | orchestrator | 2026-02-04 00:53:57 | INFO  | Task 4b82724b-ecb1-4316-873a-4f7a89ed7b41 is in state STARTED 2026-02-04 00:53:57.674472 | orchestrator | 2026-02-04 00:53:57 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:54:00.774926 | orchestrator | 2026-02-04 00:54:00 | INFO  | Task d30e904e-f5ed-49cd-95ad-3c4d6f37bdb8 is in state STARTED 2026-02-04 00:54:00.776070 | orchestrator | 2026-02-04 00:54:00 | INFO  | Task bf6ef7d2-e02d-4506-bb9b-726c35056870 is in state STARTED 2026-02-04 00:54:00.777165 | orchestrator | 2026-02-04 00:54:00 | INFO  | Task 989a85c0-e4f7-4c9c-8bfd-ae78edd6b507 is in state STARTED 2026-02-04 00:54:00.778308 | orchestrator | 2026-02-04 00:54:00 | INFO  | Task 
7700df81-1273-4fd3-ae37-f13a7e14c535 is in state STARTED
2026-02-04 00:54:00.780619 | orchestrator | 2026-02-04 00:54:00 | INFO  | Task 4b82724b-ecb1-4316-873a-4f7a89ed7b41 is in state STARTED
2026-02-04 00:54:00.780664 | orchestrator | 2026-02-04 00:54:00 | INFO  | Wait 1 second(s) until the next check
2026-02-04 00:54:25.754469 | orchestrator | 2026-02-04 00:54:25 | INFO  | Task
d30e904e-f5ed-49cd-95ad-3c4d6f37bdb8 is in state STARTED 2026-02-04 00:54:25.756056 | orchestrator | 2026-02-04 00:54:25 | INFO  | Task bf6ef7d2-e02d-4506-bb9b-726c35056870 is in state SUCCESS 2026-02-04 00:54:25.757622 | orchestrator | 2026-02-04 00:54:25.757661 | orchestrator | 2026-02-04 00:54:25.757666 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2026-02-04 00:54:25.757672 | orchestrator | 2026-02-04 00:54:25.757676 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2026-02-04 00:54:25.757755 | orchestrator | Wednesday 04 February 2026 00:48:50 +0000 (0:00:00.227) 0:00:00.227 **** 2026-02-04 00:54:25.757761 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:54:25.757767 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:54:25.757771 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:54:25.757775 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:54:25.757779 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:54:25.757783 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:54:25.757787 | orchestrator | 2026-02-04 00:54:25.757791 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2026-02-04 00:54:25.757795 | orchestrator | Wednesday 04 February 2026 00:48:51 +0000 (0:00:01.001) 0:00:01.228 **** 2026-02-04 00:54:25.757800 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:54:25.757804 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:54:25.757808 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:54:25.757812 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:54:25.757816 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:54:25.757822 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:54:25.757828 | orchestrator | 2026-02-04 00:54:25.757834 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2026-02-04 00:54:25.757840 | 
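The repeated "is in state STARTED ... Wait 1 second(s)" messages above come from a client polling asynchronous task state once per second until every task leaves STARTED. A minimal sketch of that pattern (shortened task IDs and a stubbed `task_state` function are assumptions; the real client queries the task API):

```shell
# Poll a set of task IDs; repeat with a 1-second delay while any is STARTED.
# task_state is a stub standing in for the real status query.
task_state() { echo "SUCCESS"; }

tasks="d30e904e bf6ef7d2 989a85c0 7700df81 4b82724b"
pending=1
while [ "$pending" -eq 1 ]; do
  pending=0
  for t in $tasks; do
    state=$(task_state "$t")
    echo "Task $t is in state $state"
    if [ "$state" = "STARTED" ]; then pending=1; fi
  done
  if [ "$pending" -eq 1 ]; then
    echo "Wait 1 second(s) until the next check"
    sleep 1
  fi
done
```

With the stub every task reports SUCCESS immediately, so the loop exits after one round; against a live API the loop would cycle exactly as the log shows.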
orchestrator | Wednesday 04 February 2026 00:48:53 +0000 (0:00:01.163) 0:00:02.392 **** 2026-02-04 00:54:25.757849 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:54:25.757857 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:54:25.757873 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:54:25.757881 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:54:25.757915 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:54:25.757921 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:54:25.757928 | orchestrator | 2026-02-04 00:54:25.757934 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2026-02-04 00:54:25.757940 | orchestrator | Wednesday 04 February 2026 00:48:54 +0000 (0:00:01.063) 0:00:03.455 **** 2026-02-04 00:54:25.757946 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:54:25.757952 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:54:25.757958 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:54:25.757963 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:54:25.757969 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:54:25.757975 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:54:25.757980 | orchestrator | 2026-02-04 00:54:25.757986 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2026-02-04 00:54:25.757992 | orchestrator | Wednesday 04 February 2026 00:48:56 +0000 (0:00:02.910) 0:00:06.365 **** 2026-02-04 00:54:25.757997 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:54:25.758004 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:54:25.758011 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:54:25.758067 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:54:25.758071 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:54:25.758075 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:54:25.758079 | orchestrator | 2026-02-04 00:54:25.758083 | 
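The "Enable IPv4 forwarding" and "Enable IPv6 forwarding" tasks above reduce to setting kernel sysctl keys on each node. A dry-run sketch (echo only, since applying them needs root; the exact keys and whether the role persists them under /etc/sysctl.d/ are assumptions):

```shell
# Print the sysctl settings the forwarding tasks amount to (illustrative;
# the role may persist them via /etc/sysctl.d/ rather than `sysctl -w`).
for kv in net.ipv4.ip_forward=1 \
          net.ipv6.conf.all.forwarding=1 \
          net.ipv6.conf.all.accept_ra=2; do
  echo "sysctl -w $kv"
done
```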
orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] ************************** 2026-02-04 00:54:25.758088 | orchestrator | Wednesday 04 February 2026 00:48:59 +0000 (0:00:02.437) 0:00:08.803 **** 2026-02-04 00:54:25.758091 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:54:25.758095 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:54:25.758099 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:54:25.758104 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:54:25.758108 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:54:25.758140 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:54:25.758147 | orchestrator | 2026-02-04 00:54:25.758153 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2026-02-04 00:54:25.758159 | orchestrator | Wednesday 04 February 2026 00:49:01 +0000 (0:00:02.264) 0:00:11.068 **** 2026-02-04 00:54:25.758166 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:54:25.758172 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:54:25.758187 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:54:25.758193 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:54:25.758200 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:54:25.758248 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:54:25.758255 | orchestrator | 2026-02-04 00:54:25.758262 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2026-02-04 00:54:25.758268 | orchestrator | Wednesday 04 February 2026 00:49:02 +0000 (0:00:01.143) 0:00:12.212 **** 2026-02-04 00:54:25.758276 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:54:25.758281 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:54:25.758286 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:54:25.758291 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:54:25.758296 | orchestrator | skipping: [testbed-node-2] 2026-02-04 
00:54:25.758300 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:54:25.758305 | orchestrator | 2026-02-04 00:54:25.758310 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2026-02-04 00:54:25.758316 | orchestrator | Wednesday 04 February 2026 00:49:04 +0000 (0:00:01.169) 0:00:13.381 **** 2026-02-04 00:54:25.758322 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-04 00:54:25.758331 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-04 00:54:25.758339 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:54:25.758346 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-04 00:54:25.758352 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-04 00:54:25.758359 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:54:25.758365 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-04 00:54:25.758372 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-04 00:54:25.758378 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:54:25.758384 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-04 00:54:25.758405 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-04 00:54:25.758412 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:54:25.758419 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-04 00:54:25.758425 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-04 00:54:25.758432 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:54:25.758438 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  
2026-02-04 00:54:25.758444 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-04 00:54:25.758451 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:54:25.758457 | orchestrator | 2026-02-04 00:54:25.758463 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] ********************* 2026-02-04 00:54:25.758469 | orchestrator | Wednesday 04 February 2026 00:49:05 +0000 (0:00:01.601) 0:00:14.983 **** 2026-02-04 00:54:25.758475 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:54:25.758482 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:54:25.758489 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:54:25.758494 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:54:25.758500 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:54:25.758507 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:54:25.758513 | orchestrator | 2026-02-04 00:54:25.758520 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2026-02-04 00:54:25.758529 | orchestrator | Wednesday 04 February 2026 00:49:08 +0000 (0:00:02.435) 0:00:17.419 **** 2026-02-04 00:54:25.758535 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:54:25.758542 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:54:25.758549 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:54:25.758554 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:54:25.758569 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:54:25.758576 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:54:25.758581 | orchestrator | 2026-02-04 00:54:25.758587 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2026-02-04 00:54:25.758592 | orchestrator | Wednesday 04 February 2026 00:49:10 +0000 (0:00:02.400) 0:00:19.819 **** 2026-02-04 00:54:25.758598 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:54:25.758604 | 
orchestrator | changed: [testbed-node-0] 2026-02-04 00:54:25.758611 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:54:25.758616 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:54:25.758622 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:54:25.758628 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:54:25.758634 | orchestrator | 2026-02-04 00:54:25.758641 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2026-02-04 00:54:25.758647 | orchestrator | Wednesday 04 February 2026 00:49:17 +0000 (0:00:06.734) 0:00:26.553 **** 2026-02-04 00:54:25.758653 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:54:25.758659 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:54:25.758665 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:54:25.758672 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:54:25.758678 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:54:25.758683 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:54:25.758689 | orchestrator | 2026-02-04 00:54:25.758695 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2026-02-04 00:54:25.758701 | orchestrator | Wednesday 04 February 2026 00:49:20 +0000 (0:00:03.198) 0:00:29.753 **** 2026-02-04 00:54:25.758706 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:54:25.758713 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:54:25.758719 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:54:25.758726 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:54:25.758732 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:54:25.758738 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:54:25.758744 | orchestrator | 2026-02-04 00:54:25.758750 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2026-02-04 00:54:25.758758 | 
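The "Download k3s binary x64" task above fetches a release binary onto each node. A dry-run sketch of the equivalent manual step (the version pinned here is an example, not the one the role used, and the install path is the conventional one):

```shell
# Illustrative equivalent of the k3s_download role's x64 task: fetch a k3s
# release binary and make it executable. Echo only, to stay offline.
K3S_VERSION="v1.30.4+k3s1"   # assumption: example version, not the job's
URL="https://github.com/k3s-io/k3s/releases/download/${K3S_VERSION}/k3s"
echo "would run: curl -fsSL -o /usr/local/bin/k3s ${URL}"
echo "would run: chmod 0755 /usr/local/bin/k3s"
```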
orchestrator | Wednesday 04 February 2026 00:49:24 +0000 (0:00:04.274) 0:00:34.027 **** 2026-02-04 00:54:25.758765 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:54:25.759237 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:54:25.759267 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:54:25.759271 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:54:25.759275 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:54:25.759279 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:54:25.759283 | orchestrator | 2026-02-04 00:54:25.759287 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] *************** 2026-02-04 00:54:25.759292 | orchestrator | Wednesday 04 February 2026 00:49:27 +0000 (0:00:02.351) 0:00:36.379 **** 2026-02-04 00:54:25.759296 | orchestrator | skipping: [testbed-node-3] => (item=rancher)  2026-02-04 00:54:25.759300 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)  2026-02-04 00:54:25.759304 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:54:25.759308 | orchestrator | skipping: [testbed-node-4] => (item=rancher)  2026-02-04 00:54:25.759312 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)  2026-02-04 00:54:25.759315 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:54:25.759319 | orchestrator | skipping: [testbed-node-5] => (item=rancher)  2026-02-04 00:54:25.759323 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)  2026-02-04 00:54:25.759330 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:54:25.759334 | orchestrator | skipping: [testbed-node-0] => (item=rancher)  2026-02-04 00:54:25.759344 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)  2026-02-04 00:54:25.759348 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:54:25.759352 | orchestrator | skipping: [testbed-node-1] => (item=rancher)  2026-02-04 00:54:25.759364 | orchestrator | skipping: [testbed-node-1] => 
(item=rancher/k3s)  2026-02-04 00:54:25.759368 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:54:25.759371 | orchestrator | skipping: [testbed-node-2] => (item=rancher)  2026-02-04 00:54:25.759375 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)  2026-02-04 00:54:25.759379 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:54:25.759383 | orchestrator | 2026-02-04 00:54:25.759387 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] *** 2026-02-04 00:54:25.759402 | orchestrator | Wednesday 04 February 2026 00:49:30 +0000 (0:00:03.221) 0:00:39.600 **** 2026-02-04 00:54:25.759406 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:54:25.759410 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:54:25.759414 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:54:25.759418 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:54:25.759421 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:54:25.759425 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:54:25.759429 | orchestrator | 2026-02-04 00:54:25.759433 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] *** 2026-02-04 00:54:25.759437 | orchestrator | Wednesday 04 February 2026 00:49:31 +0000 (0:00:01.462) 0:00:41.062 **** 2026-02-04 00:54:25.759441 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:54:25.759445 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:54:25.759449 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:54:25.759453 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:54:25.759456 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:54:25.759460 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:54:25.759464 | orchestrator | 2026-02-04 00:54:25.759468 | orchestrator | PLAY [Deploy k3s master nodes] ************************************************* 2026-02-04 00:54:25.759471 
| orchestrator | 2026-02-04 00:54:25.759475 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] *** 2026-02-04 00:54:25.759479 | orchestrator | Wednesday 04 February 2026 00:49:34 +0000 (0:00:03.069) 0:00:44.131 **** 2026-02-04 00:54:25.759483 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:54:25.759487 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:54:25.759491 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:54:25.759494 | orchestrator | 2026-02-04 00:54:25.759498 | orchestrator | TASK [k3s_server : Stop k3s-init] ********************************************** 2026-02-04 00:54:25.759502 | orchestrator | Wednesday 04 February 2026 00:49:37 +0000 (0:00:02.280) 0:00:46.412 **** 2026-02-04 00:54:25.759506 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:54:25.759510 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:54:25.759514 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:54:25.759517 | orchestrator | 2026-02-04 00:54:25.759521 | orchestrator | TASK [k3s_server : Stop k3s] *************************************************** 2026-02-04 00:54:25.759525 | orchestrator | Wednesday 04 February 2026 00:49:38 +0000 (0:00:01.710) 0:00:48.123 **** 2026-02-04 00:54:25.759529 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:54:25.759533 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:54:25.759536 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:54:25.759540 | orchestrator | 2026-02-04 00:54:25.759544 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] **************************** 2026-02-04 00:54:25.759548 | orchestrator | Wednesday 04 February 2026 00:49:39 +0000 (0:00:01.038) 0:00:49.161 **** 2026-02-04 00:54:25.759552 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:54:25.759556 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:54:25.759559 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:54:25.759563 | orchestrator | 2026-02-04 00:54:25.759567 | orchestrator | TASK 
[k3s_server : Deploy K3s http_proxy conf] ********************************* 2026-02-04 00:54:25.759571 | orchestrator | Wednesday 04 February 2026 00:49:41 +0000 (0:00:01.532) 0:00:50.694 **** 2026-02-04 00:54:25.759575 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:54:25.759579 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:54:25.759582 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:54:25.759586 | orchestrator | 2026-02-04 00:54:25.759596 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] ************************** 2026-02-04 00:54:25.759603 | orchestrator | Wednesday 04 February 2026 00:49:42 +0000 (0:00:01.256) 0:00:51.951 **** 2026-02-04 00:54:25.759608 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:54:25.759617 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:54:25.759625 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:54:25.759630 | orchestrator | 2026-02-04 00:54:25.759636 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] ************************** 2026-02-04 00:54:25.759642 | orchestrator | Wednesday 04 February 2026 00:49:44 +0000 (0:00:02.004) 0:00:53.956 **** 2026-02-04 00:54:25.759648 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:54:25.759654 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:54:25.759659 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:54:25.759665 | orchestrator | 2026-02-04 00:54:25.759672 | orchestrator | TASK [k3s_server : Deploy vip manifest] **************************************** 2026-02-04 00:54:25.759677 | orchestrator | Wednesday 04 February 2026 00:49:46 +0000 (0:00:01.934) 0:00:55.891 **** 2026-02-04 00:54:25.759684 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 00:54:25.759691 | orchestrator | 2026-02-04 00:54:25.759697 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] ******************************* 
2026-02-04 00:54:25.759703 | orchestrator | Wednesday 04 February 2026 00:49:47 +0000 (0:00:00.602) 0:00:56.493 **** 2026-02-04 00:54:25.759710 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:54:25.759716 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:54:25.759722 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:54:25.759727 | orchestrator | 2026-02-04 00:54:25.759734 | orchestrator | TASK [k3s_server : Create manifests directory on first master] ***************** 2026-02-04 00:54:25.759740 | orchestrator | Wednesday 04 February 2026 00:49:52 +0000 (0:00:04.955) 0:01:01.449 **** 2026-02-04 00:54:25.759747 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:54:25.759752 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:54:25.759760 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:54:25.759764 | orchestrator | 2026-02-04 00:54:25.759771 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] ***************** 2026-02-04 00:54:25.759775 | orchestrator | Wednesday 04 February 2026 00:49:53 +0000 (0:00:01.212) 0:01:02.662 **** 2026-02-04 00:54:25.759779 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:54:25.759783 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:54:25.759787 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:54:25.759790 | orchestrator | 2026-02-04 00:54:25.759794 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] ************************** 2026-02-04 00:54:25.759798 | orchestrator | Wednesday 04 February 2026 00:49:54 +0000 (0:00:00.930) 0:01:03.592 **** 2026-02-04 00:54:25.759802 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:54:25.759806 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:54:25.759809 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:54:25.759813 | orchestrator | 2026-02-04 00:54:25.759817 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************ 2026-02-04 
00:54:25.759825 | orchestrator | Wednesday 04 February 2026 00:49:56 +0000 (0:00:02.368) 0:01:05.960 **** 2026-02-04 00:54:25.759831 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:54:25.759837 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:54:25.759843 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:54:25.759848 | orchestrator | 2026-02-04 00:54:25.759855 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] *********************************** 2026-02-04 00:54:25.759861 | orchestrator | Wednesday 04 February 2026 00:49:58 +0000 (0:00:01.552) 0:01:07.512 **** 2026-02-04 00:54:25.759867 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:54:25.759872 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:54:25.759877 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:54:25.759883 | orchestrator | 2026-02-04 00:54:25.759888 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] ********* 2026-02-04 00:54:25.759900 | orchestrator | Wednesday 04 February 2026 00:49:58 +0000 (0:00:00.673) 0:01:08.186 **** 2026-02-04 00:54:25.759907 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:54:25.759912 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:54:25.759917 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:54:25.759922 | orchestrator | 2026-02-04 00:54:25.759928 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] ********** 2026-02-04 00:54:25.759933 | orchestrator | Wednesday 04 February 2026 00:50:01 +0000 (0:00:02.606) 0:01:10.793 **** 2026-02-04 00:54:25.759939 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:54:25.759944 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:54:25.759949 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:54:25.759955 | orchestrator | 2026-02-04 00:54:25.759960 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] *** 2026-02-04 00:54:25.759966 | 
orchestrator | Wednesday 04 February 2026 00:50:04 +0000 (0:00:02.924) 0:01:13.718 **** 2026-02-04 00:54:25.759971 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:54:25.759977 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:54:25.759983 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:54:25.759988 | orchestrator | 2026-02-04 00:54:25.759993 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] *** 2026-02-04 00:54:25.759999 | orchestrator | Wednesday 04 February 2026 00:50:05 +0000 (0:00:00.830) 0:01:14.549 **** 2026-02-04 00:54:25.760004 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-02-04 00:54:25.760012 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-02-04 00:54:25.760018 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-02-04 00:54:25.760024 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-02-04 00:54:25.760030 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-02-04 00:54:25.760035 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-02-04 00:54:25.760040 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 
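The "Verify that all nodes actually joined" task above is an Ansible retry loop (20 retries, as the FAILED - RETRYING lines show) that repeatedly checks the cluster until every node has joined. A sketch of the same idea, with a stubbed node count so it runs offline (the real check would parse `kubectl get nodes`):

```shell
# Retry until the expected number of nodes is Ready, up to a retry limit.
# get_ready_nodes is a stub for: kubectl get nodes --no-headers | grep -c ' Ready'
expected=3
get_ready_nodes() { echo 3; }

retries=20
until [ "$(get_ready_nodes)" -eq "$expected" ]; do
  retries=$((retries - 1))
  if [ "$retries" -le 0 ]; then echo "nodes failed to join"; exit 1; fi
  sleep 5
done
echo "all $expected nodes joined"
```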
2026-02-04 00:54:25.760048 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-02-04 00:54:25.760054 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-02-04 00:54:25.760060 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-02-04 00:54:25.760066 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-02-04 00:54:25.760072 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-02-04 00:54:25.760079 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:54:25.760085 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:54:25.760091 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:54:25.760097 | orchestrator | 2026-02-04 00:54:25.760103 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ****************************** 2026-02-04 00:54:25.760109 | orchestrator | Wednesday 04 February 2026 00:50:49 +0000 (0:00:43.886) 0:01:58.435 **** 2026-02-04 00:54:25.760115 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:54:25.760127 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:54:25.760135 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:54:25.760147 | orchestrator | 2026-02-04 00:54:25.760153 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] ********* 2026-02-04 00:54:25.760159 | orchestrator | Wednesday 04 February 2026 00:50:49 +0000 (0:00:00.589) 0:01:59.024 **** 2026-02-04 00:54:25.760165 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:54:25.760171 | orchestrator | changed: 
[testbed-node-1] 2026-02-04 00:54:25.760177 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:54:25.760183 | orchestrator | 2026-02-04 00:54:25.760188 | orchestrator | TASK [k3s_server : Copy K3s service file] ************************************** 2026-02-04 00:54:25.760194 | orchestrator | Wednesday 04 February 2026 00:50:50 +0000 (0:00:01.293) 0:02:00.318 **** 2026-02-04 00:54:25.760200 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:54:25.760379 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:54:25.760388 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:54:25.760395 | orchestrator | 2026-02-04 00:54:25.760412 | orchestrator | TASK [k3s_server : Enable and check K3s service] ******************************* 2026-02-04 00:54:25.760418 | orchestrator | Wednesday 04 February 2026 00:50:54 +0000 (0:00:03.081) 0:02:03.400 **** 2026-02-04 00:54:25.760424 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:54:25.760430 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:54:25.760437 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:54:25.760444 | orchestrator | 2026-02-04 00:54:25.760452 | orchestrator | TASK [k3s_server : Wait for node-token] **************************************** 2026-02-04 00:54:25.760458 | orchestrator | Wednesday 04 February 2026 00:51:19 +0000 (0:00:25.846) 0:02:29.246 **** 2026-02-04 00:54:25.760464 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:54:25.760471 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:54:25.760477 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:54:25.760482 | orchestrator | 2026-02-04 00:54:25.760488 | orchestrator | TASK [k3s_server : Register node-token file access mode] *********************** 2026-02-04 00:54:25.760493 | orchestrator | Wednesday 04 February 2026 00:51:20 +0000 (0:00:00.771) 0:02:30.018 **** 2026-02-04 00:54:25.760499 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:54:25.760505 | orchestrator | ok: [testbed-node-1] 2026-02-04 
00:54:25.760511 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:54:25.760517 | orchestrator | 2026-02-04 00:54:25.760522 | orchestrator | TASK [k3s_server : Change file access node-token] ****************************** 2026-02-04 00:54:25.760528 | orchestrator | Wednesday 04 February 2026 00:51:21 +0000 (0:00:00.765) 0:02:30.784 **** 2026-02-04 00:54:25.760534 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:54:25.760539 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:54:25.760544 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:54:25.760550 | orchestrator | 2026-02-04 00:54:25.760555 | orchestrator | TASK [k3s_server : Read node-token from master] ******************************** 2026-02-04 00:54:25.760561 | orchestrator | Wednesday 04 February 2026 00:51:22 +0000 (0:00:01.014) 0:02:31.798 **** 2026-02-04 00:54:25.760567 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:54:25.760573 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:54:25.760579 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:54:25.760585 | orchestrator | 2026-02-04 00:54:25.760590 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************ 2026-02-04 00:54:25.760597 | orchestrator | Wednesday 04 February 2026 00:51:24 +0000 (0:00:02.308) 0:02:34.107 **** 2026-02-04 00:54:25.760603 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:54:25.760609 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:54:25.760615 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:54:25.760620 | orchestrator | 2026-02-04 00:54:25.760626 | orchestrator | TASK [k3s_server : Restore node-token file access] ***************************** 2026-02-04 00:54:25.760633 | orchestrator | Wednesday 04 February 2026 00:51:25 +0000 (0:00:00.980) 0:02:35.088 **** 2026-02-04 00:54:25.760639 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:54:25.760645 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:54:25.760651 | orchestrator | changed: 
[testbed-node-2] 2026-02-04 00:54:25.760657 | orchestrator | 2026-02-04 00:54:25.760663 | orchestrator | TASK [k3s_server : Create directory .kube] ************************************* 2026-02-04 00:54:25.760680 | orchestrator | Wednesday 04 February 2026 00:51:26 +0000 (0:00:00.965) 0:02:36.054 **** 2026-02-04 00:54:25.760686 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:54:25.760692 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:54:25.760698 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:54:25.760703 | orchestrator | 2026-02-04 00:54:25.760710 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ******************** 2026-02-04 00:54:25.760716 | orchestrator | Wednesday 04 February 2026 00:51:27 +0000 (0:00:00.908) 0:02:36.963 **** 2026-02-04 00:54:25.760723 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:54:25.760729 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:54:25.760736 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:54:25.760741 | orchestrator | 2026-02-04 00:54:25.760747 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] ***** 2026-02-04 00:54:25.760753 | orchestrator | Wednesday 04 February 2026 00:51:28 +0000 (0:00:01.297) 0:02:38.260 **** 2026-02-04 00:54:25.760759 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:54:25.760765 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:54:25.760772 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:54:25.760778 | orchestrator | 2026-02-04 00:54:25.760784 | orchestrator | TASK [k3s_server : Create kubectl symlink] ************************************* 2026-02-04 00:54:25.760791 | orchestrator | Wednesday 04 February 2026 00:51:29 +0000 (0:00:01.075) 0:02:39.336 **** 2026-02-04 00:54:25.760797 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:54:25.760804 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:54:25.760810 | orchestrator | skipping: 
[testbed-node-2] 2026-02-04 00:54:25.760816 | orchestrator | 2026-02-04 00:54:25.760823 | orchestrator | TASK [k3s_server : Create crictl symlink] ************************************** 2026-02-04 00:54:25.760829 | orchestrator | Wednesday 04 February 2026 00:51:30 +0000 (0:00:00.293) 0:02:39.630 **** 2026-02-04 00:54:25.760834 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:54:25.760840 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:54:25.760847 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:54:25.760853 | orchestrator | 2026-02-04 00:54:25.760859 | orchestrator | TASK [k3s_server : Get contents of manifests folder] *************************** 2026-02-04 00:54:25.760865 | orchestrator | Wednesday 04 February 2026 00:51:30 +0000 (0:00:00.285) 0:02:39.916 **** 2026-02-04 00:54:25.760872 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:54:25.760878 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:54:25.760891 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:54:25.760896 | orchestrator | 2026-02-04 00:54:25.760900 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] *************************** 2026-02-04 00:54:25.760904 | orchestrator | Wednesday 04 February 2026 00:51:31 +0000 (0:00:00.901) 0:02:40.818 **** 2026-02-04 00:54:25.760908 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:54:25.760911 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:54:25.760915 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:54:25.760919 | orchestrator | 2026-02-04 00:54:25.760923 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] *** 2026-02-04 00:54:25.760928 | orchestrator | Wednesday 04 February 2026 00:51:32 +0000 (0:00:00.659) 0:02:41.477 **** 2026-02-04 00:54:25.760932 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-02-04 00:54:25.760946 | orchestrator | 
changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-02-04 00:54:25.760950 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-02-04 00:54:25.760954 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-02-04 00:54:25.760957 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-02-04 00:54:25.760961 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-02-04 00:54:25.760971 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-02-04 00:54:25.760976 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-02-04 00:54:25.760979 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-02-04 00:54:25.760983 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml) 2026-02-04 00:54:25.760987 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-02-04 00:54:25.760991 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-02-04 00:54:25.760995 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml) 2026-02-04 00:54:25.760998 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-02-04 00:54:25.761002 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-02-04 00:54:25.761006 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-02-04 00:54:25.761010 | orchestrator | changed: 
[testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-02-04 00:54:25.761013 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-02-04 00:54:25.761017 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-02-04 00:54:25.761021 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-02-04 00:54:25.761025 | orchestrator | 2026-02-04 00:54:25.761028 | orchestrator | PLAY [Deploy k3s worker nodes] ************************************************* 2026-02-04 00:54:25.761032 | orchestrator | 2026-02-04 00:54:25.761036 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] *** 2026-02-04 00:54:25.761040 | orchestrator | Wednesday 04 February 2026 00:51:35 +0000 (0:00:03.257) 0:02:44.734 **** 2026-02-04 00:54:25.761044 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:54:25.761047 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:54:25.761051 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:54:25.761055 | orchestrator | 2026-02-04 00:54:25.761059 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] ******************************* 2026-02-04 00:54:25.761062 | orchestrator | Wednesday 04 February 2026 00:51:35 +0000 (0:00:00.489) 0:02:45.224 **** 2026-02-04 00:54:25.761066 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:54:25.761070 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:54:25.761074 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:54:25.761078 | orchestrator | 2026-02-04 00:54:25.761081 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ****************************** 2026-02-04 00:54:25.761085 | orchestrator | Wednesday 04 February 2026 00:51:36 +0000 (0:00:00.559) 0:02:45.783 **** 2026-02-04 00:54:25.761089 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:54:25.761093 | 
orchestrator | ok: [testbed-node-4] 2026-02-04 00:54:25.761097 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:54:25.761100 | orchestrator | 2026-02-04 00:54:25.761104 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] ********************** 2026-02-04 00:54:25.761108 | orchestrator | Wednesday 04 February 2026 00:51:36 +0000 (0:00:00.305) 0:02:46.088 **** 2026-02-04 00:54:25.761112 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-04 00:54:25.761116 | orchestrator | 2026-02-04 00:54:25.761121 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] ************************* 2026-02-04 00:54:25.761127 | orchestrator | Wednesday 04 February 2026 00:51:37 +0000 (0:00:00.620) 0:02:46.708 **** 2026-02-04 00:54:25.761133 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:54:25.761138 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:54:25.761144 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:54:25.761160 | orchestrator | 2026-02-04 00:54:25.761166 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] ******************************* 2026-02-04 00:54:25.761172 | orchestrator | Wednesday 04 February 2026 00:51:37 +0000 (0:00:00.286) 0:02:46.995 **** 2026-02-04 00:54:25.761177 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:54:25.761187 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:54:25.761193 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:54:25.761199 | orchestrator | 2026-02-04 00:54:25.761225 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] ********************************** 2026-02-04 00:54:25.761232 | orchestrator | Wednesday 04 February 2026 00:51:37 +0000 (0:00:00.327) 0:02:47.322 **** 2026-02-04 00:54:25.761238 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:54:25.761244 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:54:25.761250 | 
orchestrator | skipping: [testbed-node-5] 2026-02-04 00:54:25.761255 | orchestrator | 2026-02-04 00:54:25.761261 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] *************************** 2026-02-04 00:54:25.761268 | orchestrator | Wednesday 04 February 2026 00:51:38 +0000 (0:00:00.329) 0:02:47.652 **** 2026-02-04 00:54:25.761274 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:54:25.761280 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:54:25.761286 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:54:25.761292 | orchestrator | 2026-02-04 00:54:25.761306 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] *************************** 2026-02-04 00:54:25.761313 | orchestrator | Wednesday 04 February 2026 00:51:39 +0000 (0:00:01.135) 0:02:48.787 **** 2026-02-04 00:54:25.761317 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:54:25.761321 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:54:25.761325 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:54:25.761328 | orchestrator | 2026-02-04 00:54:25.761332 | orchestrator | TASK [k3s_agent : Configure the k3s service] *********************************** 2026-02-04 00:54:25.761336 | orchestrator | Wednesday 04 February 2026 00:51:40 +0000 (0:00:01.367) 0:02:50.154 **** 2026-02-04 00:54:25.761340 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:54:25.761344 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:54:25.761347 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:54:25.761351 | orchestrator | 2026-02-04 00:54:25.761355 | orchestrator | TASK [k3s_agent : Manage k3s service] ****************************************** 2026-02-04 00:54:25.761358 | orchestrator | Wednesday 04 February 2026 00:51:42 +0000 (0:00:01.482) 0:02:51.636 **** 2026-02-04 00:54:25.761362 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:54:25.761366 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:54:25.761370 | orchestrator | 
changed: [testbed-node-5] 2026-02-04 00:54:25.761374 | orchestrator | 2026-02-04 00:54:25.761378 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2026-02-04 00:54:25.761382 | orchestrator | 2026-02-04 00:54:25.761386 | orchestrator | TASK [Get home directory of operator user] ************************************* 2026-02-04 00:54:25.761389 | orchestrator | Wednesday 04 February 2026 00:51:53 +0000 (0:00:11.215) 0:03:02.852 **** 2026-02-04 00:54:25.761393 | orchestrator | ok: [testbed-manager] 2026-02-04 00:54:25.761397 | orchestrator | 2026-02-04 00:54:25.761401 | orchestrator | TASK [Create .kube directory] ************************************************** 2026-02-04 00:54:25.761404 | orchestrator | Wednesday 04 February 2026 00:51:54 +0000 (0:00:01.065) 0:03:03.917 **** 2026-02-04 00:54:25.761408 | orchestrator | changed: [testbed-manager] 2026-02-04 00:54:25.761412 | orchestrator | 2026-02-04 00:54:25.761416 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-02-04 00:54:25.761419 | orchestrator | Wednesday 04 February 2026 00:51:55 +0000 (0:00:00.534) 0:03:04.452 **** 2026-02-04 00:54:25.761423 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-02-04 00:54:25.761427 | orchestrator | 2026-02-04 00:54:25.761431 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-02-04 00:54:25.761434 | orchestrator | Wednesday 04 February 2026 00:51:55 +0000 (0:00:00.657) 0:03:05.110 **** 2026-02-04 00:54:25.761438 | orchestrator | changed: [testbed-manager] 2026-02-04 00:54:25.761447 | orchestrator | 2026-02-04 00:54:25.761451 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2026-02-04 00:54:25.761455 | orchestrator | Wednesday 04 February 2026 00:51:56 +0000 (0:00:01.249) 0:03:06.359 **** 2026-02-04 00:54:25.761459 | orchestrator | changed: 
[testbed-manager] 2026-02-04 00:54:25.761462 | orchestrator | 2026-02-04 00:54:25.761466 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2026-02-04 00:54:25.761470 | orchestrator | Wednesday 04 February 2026 00:51:57 +0000 (0:00:00.819) 0:03:07.179 **** 2026-02-04 00:54:25.761474 | orchestrator | changed: [testbed-manager -> localhost] 2026-02-04 00:54:25.761477 | orchestrator | 2026-02-04 00:54:25.761481 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2026-02-04 00:54:25.761485 | orchestrator | Wednesday 04 February 2026 00:51:59 +0000 (0:00:02.172) 0:03:09.351 **** 2026-02-04 00:54:25.761489 | orchestrator | changed: [testbed-manager -> localhost] 2026-02-04 00:54:25.761492 | orchestrator | 2026-02-04 00:54:25.761496 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2026-02-04 00:54:25.761500 | orchestrator | Wednesday 04 February 2026 00:52:01 +0000 (0:00:01.178) 0:03:10.530 **** 2026-02-04 00:54:25.761504 | orchestrator | changed: [testbed-manager] 2026-02-04 00:54:25.761507 | orchestrator | 2026-02-04 00:54:25.761512 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2026-02-04 00:54:25.761515 | orchestrator | Wednesday 04 February 2026 00:52:02 +0000 (0:00:00.852) 0:03:11.383 **** 2026-02-04 00:54:25.761519 | orchestrator | changed: [testbed-manager] 2026-02-04 00:54:25.761523 | orchestrator | 2026-02-04 00:54:25.761527 | orchestrator | PLAY [Apply role kubectl] ****************************************************** 2026-02-04 00:54:25.761531 | orchestrator | 2026-02-04 00:54:25.761534 | orchestrator | TASK [kubectl : Gather variables for each operating system] ******************** 2026-02-04 00:54:25.761538 | orchestrator | Wednesday 04 February 2026 00:52:02 +0000 (0:00:00.539) 0:03:11.923 **** 2026-02-04 00:54:25.761542 | orchestrator | ok: [testbed-manager] 
2026-02-04 00:54:25.761546 | orchestrator | 2026-02-04 00:54:25.761549 | orchestrator | TASK [kubectl : Include distribution specific install tasks] ******************* 2026-02-04 00:54:25.761553 | orchestrator | Wednesday 04 February 2026 00:52:02 +0000 (0:00:00.163) 0:03:12.086 **** 2026-02-04 00:54:25.761557 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager 2026-02-04 00:54:25.761561 | orchestrator | 2026-02-04 00:54:25.761564 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ****************** 2026-02-04 00:54:25.761568 | orchestrator | Wednesday 04 February 2026 00:52:02 +0000 (0:00:00.268) 0:03:12.355 **** 2026-02-04 00:54:25.761576 | orchestrator | ok: [testbed-manager] 2026-02-04 00:54:25.761580 | orchestrator | 2026-02-04 00:54:25.761584 | orchestrator | TASK [kubectl : Install apt-transport-https package] *************************** 2026-02-04 00:54:25.761588 | orchestrator | Wednesday 04 February 2026 00:52:04 +0000 (0:00:01.190) 0:03:13.545 **** 2026-02-04 00:54:25.761591 | orchestrator | ok: [testbed-manager] 2026-02-04 00:54:25.761595 | orchestrator | 2026-02-04 00:54:25.761599 | orchestrator | TASK [kubectl : Add repository gpg key] **************************************** 2026-02-04 00:54:25.761603 | orchestrator | Wednesday 04 February 2026 00:52:06 +0000 (0:00:02.180) 0:03:15.725 **** 2026-02-04 00:54:25.761607 | orchestrator | changed: [testbed-manager] 2026-02-04 00:54:25.761610 | orchestrator | 2026-02-04 00:54:25.761614 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************ 2026-02-04 00:54:25.761618 | orchestrator | Wednesday 04 February 2026 00:52:07 +0000 (0:00:01.154) 0:03:16.880 **** 2026-02-04 00:54:25.761622 | orchestrator | ok: [testbed-manager] 2026-02-04 00:54:25.761625 | orchestrator | 2026-02-04 00:54:25.761633 | orchestrator | TASK [kubectl : Add repository Debian] ***************************************** 
2026-02-04 00:54:25.761637 | orchestrator | Wednesday 04 February 2026 00:52:08 +0000 (0:00:00.625) 0:03:17.505 **** 2026-02-04 00:54:25.761641 | orchestrator | changed: [testbed-manager] 2026-02-04 00:54:25.761644 | orchestrator | 2026-02-04 00:54:25.761648 | orchestrator | TASK [kubectl : Install required packages] ************************************* 2026-02-04 00:54:25.761655 | orchestrator | Wednesday 04 February 2026 00:52:20 +0000 (0:00:12.832) 0:03:30.338 **** 2026-02-04 00:54:25.761659 | orchestrator | changed: [testbed-manager] 2026-02-04 00:54:25.761663 | orchestrator | 2026-02-04 00:54:25.761667 | orchestrator | TASK [kubectl : Remove kubectl symlink] **************************************** 2026-02-04 00:54:25.761671 | orchestrator | Wednesday 04 February 2026 00:52:38 +0000 (0:00:17.270) 0:03:47.609 **** 2026-02-04 00:54:25.761674 | orchestrator | ok: [testbed-manager] 2026-02-04 00:54:25.761678 | orchestrator | 2026-02-04 00:54:25.761682 | orchestrator | PLAY [Run post actions on master nodes] **************************************** 2026-02-04 00:54:25.761686 | orchestrator | 2026-02-04 00:54:25.761691 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] *** 2026-02-04 00:54:25.761697 | orchestrator | Wednesday 04 February 2026 00:52:39 +0000 (0:00:00.995) 0:03:48.605 **** 2026-02-04 00:54:25.761702 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:54:25.761708 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:54:25.761716 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:54:25.761726 | orchestrator | 2026-02-04 00:54:25.761735 | orchestrator | TASK [k3s_server_post : Deploy calico] ***************************************** 2026-02-04 00:54:25.761740 | orchestrator | Wednesday 04 February 2026 00:52:39 +0000 (0:00:00.410) 0:03:49.016 **** 2026-02-04 00:54:25.761746 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:54:25.761752 | orchestrator | skipping: [testbed-node-1] 
2026-02-04 00:54:25.761758 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:54:25.761764 | orchestrator | 2026-02-04 00:54:25.761769 | orchestrator | TASK [k3s_server_post : Deploy cilium] ***************************************** 2026-02-04 00:54:25.761775 | orchestrator | Wednesday 04 February 2026 00:52:40 +0000 (0:00:00.451) 0:03:49.467 **** 2026-02-04 00:54:25.761782 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 00:54:25.761788 | orchestrator | 2026-02-04 00:54:25.761793 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ****************** 2026-02-04 00:54:25.761799 | orchestrator | Wednesday 04 February 2026 00:52:40 +0000 (0:00:00.761) 0:03:50.229 **** 2026-02-04 00:54:25.761805 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-02-04 00:54:25.761811 | orchestrator | 2026-02-04 00:54:25.761817 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] ********************* 2026-02-04 00:54:25.761823 | orchestrator | Wednesday 04 February 2026 00:52:41 +0000 (0:00:00.891) 0:03:51.120 **** 2026-02-04 00:54:25.761830 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-04 00:54:25.761837 | orchestrator | 2026-02-04 00:54:25.761843 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************ 2026-02-04 00:54:25.761850 | orchestrator | Wednesday 04 February 2026 00:52:42 +0000 (0:00:01.226) 0:03:52.346 **** 2026-02-04 00:54:25.761855 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:54:25.761861 | orchestrator | 2026-02-04 00:54:25.761886 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] ********************** 2026-02-04 00:54:25.761892 | orchestrator | Wednesday 04 February 2026 00:52:43 +0000 (0:00:00.200) 0:03:52.547 **** 2026-02-04 00:54:25.761898 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-04 00:54:25.761904 | 
orchestrator | 2026-02-04 00:54:25.761910 | orchestrator | TASK [k3s_server_post : Check Cilium version] ********************************** 2026-02-04 00:54:25.761916 | orchestrator | Wednesday 04 February 2026 00:52:44 +0000 (0:00:01.389) 0:03:53.936 **** 2026-02-04 00:54:25.761923 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:54:25.761929 | orchestrator | 2026-02-04 00:54:25.761934 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************ 2026-02-04 00:54:25.761941 | orchestrator | Wednesday 04 February 2026 00:52:44 +0000 (0:00:00.225) 0:03:54.162 **** 2026-02-04 00:54:25.761947 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:54:25.761952 | orchestrator | 2026-02-04 00:54:25.761958 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] ********************** 2026-02-04 00:54:25.761970 | orchestrator | Wednesday 04 February 2026 00:52:44 +0000 (0:00:00.207) 0:03:54.369 **** 2026-02-04 00:54:25.761976 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:54:25.761982 | orchestrator | 2026-02-04 00:54:25.761988 | orchestrator | TASK [k3s_server_post : Log result] ******************************************** 2026-02-04 00:54:25.761994 | orchestrator | Wednesday 04 February 2026 00:52:45 +0000 (0:00:00.202) 0:03:54.572 **** 2026-02-04 00:54:25.762000 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:54:25.762006 | orchestrator | 2026-02-04 00:54:25.762063 | orchestrator | TASK [k3s_server_post : Install Cilium] **************************************** 2026-02-04 00:54:25.762075 | orchestrator | Wednesday 04 February 2026 00:52:45 +0000 (0:00:00.154) 0:03:54.726 **** 2026-02-04 00:54:25.762082 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-02-04 00:54:25.762088 | orchestrator | 2026-02-04 00:54:25.762095 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] ***************************** 2026-02-04 00:54:25.762106 | orchestrator | Wednesday 04 
February 2026 00:52:52 +0000 (0:00:07.463) 0:04:02.190 **** 2026-02-04 00:54:25.762113 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator) 2026-02-04 00:54:25.762119 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left). 2026-02-04 00:54:25.762125 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium) 2026-02-04 00:54:25.762131 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay) 2026-02-04 00:54:25.762137 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui) 2026-02-04 00:54:25.762142 | orchestrator | 2026-02-04 00:54:25.762148 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************ 2026-02-04 00:54:25.762154 | orchestrator | Wednesday 04 February 2026 00:53:43 +0000 (0:00:50.646) 0:04:52.836 **** 2026-02-04 00:54:25.762169 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-04 00:54:25.762175 | orchestrator | 2026-02-04 00:54:25.762180 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ******************** 2026-02-04 00:54:25.762187 | orchestrator | Wednesday 04 February 2026 00:53:44 +0000 (0:00:01.499) 0:04:54.336 **** 2026-02-04 00:54:25.762224 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-02-04 00:54:25.762230 | orchestrator | 2026-02-04 00:54:25.762236 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] *********************************** 2026-02-04 00:54:25.762242 | orchestrator | Wednesday 04 February 2026 00:53:47 +0000 (0:00:02.319) 0:04:56.656 **** 2026-02-04 00:54:25.762248 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-02-04 00:54:25.762254 | orchestrator | 2026-02-04 00:54:25.762259 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] *** 2026-02-04 00:54:25.762265 | orchestrator | Wednesday 04 February 2026 00:53:48 +0000 
(0:00:01.377) 0:04:58.033 **** 2026-02-04 00:54:25.762271 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:54:25.762277 | orchestrator | 2026-02-04 00:54:25.762282 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] ************************* 2026-02-04 00:54:25.762288 | orchestrator | Wednesday 04 February 2026 00:53:48 +0000 (0:00:00.148) 0:04:58.182 **** 2026-02-04 00:54:25.762293 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io) 2026-02-04 00:54:25.762306 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io) 2026-02-04 00:54:25.762313 | orchestrator | 2026-02-04 00:54:25.762318 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] *********************************** 2026-02-04 00:54:25.762323 | orchestrator | Wednesday 04 February 2026 00:53:51 +0000 (0:00:02.460) 0:05:00.643 **** 2026-02-04 00:54:25.762329 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:54:25.762335 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:54:25.762340 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:54:25.762346 | orchestrator | 2026-02-04 00:54:25.762352 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] *************** 2026-02-04 00:54:25.762358 | orchestrator | Wednesday 04 February 2026 00:53:51 +0000 (0:00:00.375) 0:05:01.018 **** 2026-02-04 00:54:25.762371 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:54:25.762378 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:54:25.762385 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:54:25.762390 | orchestrator | 2026-02-04 00:54:25.762396 | orchestrator | PLAY [Apply role k9s] ********************************************************** 2026-02-04 00:54:25.762402 | orchestrator | 2026-02-04 00:54:25.762407 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************ 2026-02-04 
00:54:25.762413 | orchestrator | Wednesday 04 February 2026 00:53:53 +0000 (0:00:01.623) 0:05:02.642 **** 2026-02-04 00:54:25.762420 | orchestrator | ok: [testbed-manager] 2026-02-04 00:54:25.762426 | orchestrator | 2026-02-04 00:54:25.762432 | orchestrator | TASK [k9s : Include distribution specific install tasks] *********************** 2026-02-04 00:54:25.762438 | orchestrator | Wednesday 04 February 2026 00:53:53 +0000 (0:00:00.219) 0:05:02.862 **** 2026-02-04 00:54:25.762444 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager 2026-02-04 00:54:25.762451 | orchestrator | 2026-02-04 00:54:25.762457 | orchestrator | TASK [k9s : Install k9s packages] ********************************************** 2026-02-04 00:54:25.762463 | orchestrator | Wednesday 04 February 2026 00:53:53 +0000 (0:00:00.292) 0:05:03.155 **** 2026-02-04 00:54:25.762470 | orchestrator | changed: [testbed-manager] 2026-02-04 00:54:25.762475 | orchestrator | 2026-02-04 00:54:25.762480 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] ***************** 2026-02-04 00:54:25.762486 | orchestrator | 2026-02-04 00:54:25.762492 | orchestrator | TASK [Merge labels, annotations, and taints] *********************************** 2026-02-04 00:54:25.762498 | orchestrator | Wednesday 04 February 2026 00:54:01 +0000 (0:00:07.285) 0:05:10.440 **** 2026-02-04 00:54:25.762504 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:54:25.762510 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:54:25.762516 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:54:25.762522 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:54:25.762528 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:54:25.762534 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:54:25.762541 | orchestrator | 2026-02-04 00:54:25.762547 | orchestrator | TASK [Manage labels] *********************************************************** 2026-02-04 00:54:25.762553 | orchestrator | 
Wednesday 04 February 2026 00:54:02 +0000 (0:00:01.712) 0:05:12.153 ****
2026-02-04 00:54:25.762560 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-02-04 00:54:25.762567 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-02-04 00:54:25.762573 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-02-04 00:54:25.762579 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-02-04 00:54:25.762585 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-02-04 00:54:25.762595 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-02-04 00:54:25.762602 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-02-04 00:54:25.762608 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-02-04 00:54:25.762614 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled)
2026-02-04 00:54:25.762620 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled)
2026-02-04 00:54:25.762626 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled)
2026-02-04 00:54:25.762632 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-02-04 00:54:25.762645 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-02-04 00:54:25.762651 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-02-04 00:54:25.762658 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-02-04 00:54:25.762670 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-02-04 00:54:25.762677 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-02-04 00:54:25.762682 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-02-04 00:54:25.762689 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-02-04 00:54:25.762695 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-02-04 00:54:25.762701 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-02-04 00:54:25.762707 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-02-04 00:54:25.762713 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-02-04 00:54:25.762719 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-02-04 00:54:25.762725 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-02-04 00:54:25.762731 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-02-04 00:54:25.762738 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-02-04 00:54:25.762744 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-02-04 00:54:25.762750 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-02-04 00:54:25.762756 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-02-04 00:54:25.762762 | orchestrator |
2026-02-04 00:54:25.762769 | orchestrator | TASK [Manage annotations] ******************************************************
2026-02-04 00:54:25.762775 | orchestrator | Wednesday 04 February 2026 00:54:20 +0000 (0:00:17.622) 0:05:29.775 ****
2026-02-04 00:54:25.762781 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:54:25.762788 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:54:25.762794 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:54:25.762800 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:54:25.762807 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:54:25.762813 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:54:25.762819 | orchestrator |
2026-02-04 00:54:25.762825 | orchestrator | TASK [Manage taints] ***********************************************************
2026-02-04 00:54:25.762831 | orchestrator | Wednesday 04 February 2026 00:54:21 +0000 (0:00:01.073) 0:05:30.848 ****
2026-02-04 00:54:25.762838 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:54:25.762844 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:54:25.762850 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:54:25.762856 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:54:25.762861 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:54:25.762867 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:54:25.762872 | orchestrator |
2026-02-04 00:54:25.762878 | orchestrator | PLAY RECAP *********************************************************************
2026-02-04 00:54:25.762884 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-04 00:54:25.762893 | orchestrator | testbed-node-0 : ok=50  changed=23  unreachable=0 failed=0 skipped=28  rescued=0 ignored=0
2026-02-04 00:54:25.762899 | orchestrator | testbed-node-1 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-02-04 00:54:25.762905 | orchestrator | testbed-node-2 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-02-04 00:54:25.762911 | orchestrator | testbed-node-3 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-02-04 00:54:25.762926 | orchestrator | testbed-node-4 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-02-04 00:54:25.762940 | orchestrator | testbed-node-5 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-02-04 00:54:25.762947 | orchestrator |
2026-02-04 00:54:25.762954 | orchestrator |
2026-02-04 00:54:25.762960 | orchestrator | TASKS RECAP ********************************************************************
2026-02-04 00:54:25.762967 | orchestrator | Wednesday 04 February 2026 00:54:22 +0000 (0:00:00.705) 0:05:31.554 ****
2026-02-04 00:54:25.762973 | orchestrator | ===============================================================================
2026-02-04 00:54:25.762980 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 50.65s
2026-02-04 00:54:25.762987 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 43.89s
2026-02-04 00:54:25.762993 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 25.85s
2026-02-04 00:54:25.763005 | orchestrator | Manage labels ---------------------------------------------------------- 17.62s
2026-02-04 00:54:25.763011 | orchestrator | kubectl : Install required packages ------------------------------------ 17.27s
2026-02-04 00:54:25.763017 | orchestrator | kubectl : Add repository Debian ---------------------------------------- 12.83s
2026-02-04 00:54:25.763023 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 11.22s
2026-02-04 00:54:25.763029 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 7.46s
2026-02-04 00:54:25.763035 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 7.29s
2026-02-04 00:54:25.763041 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 6.73s
2026-02-04 00:54:25.763047 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 4.96s
2026-02-04 00:54:25.763054 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 4.27s
2026-02-04 00:54:25.763061 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.26s
2026-02-04 00:54:25.763068 | orchestrator | k3s_custom_registries : Create directory /etc/rancher/k3s --------------- 3.22s
2026-02-04 00:54:25.763074 | orchestrator | k3s_download : Download k3s binary arm64 -------------------------------- 3.20s
2026-02-04 00:54:25.763079 | orchestrator | k3s_server : Copy K3s service file -------------------------------------- 3.08s
2026-02-04 00:54:25.763085 | orchestrator | k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured --- 3.07s
2026-02-04 00:54:25.763090 | orchestrator | k3s_server : Detect Kubernetes version for label compatibility ---------- 2.92s
2026-02-04 00:54:25.763096 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.91s
2026-02-04 00:54:25.763102 | orchestrator | k3s_server : Init cluster inside the transient k3s-init service --------- 2.61s
2026-02-04 00:54:25.763108 | orchestrator | 2026-02-04 00:54:25 | INFO  | Task bf5c5209-ebef-40b1-87d0-8e826395a3c6 is in state STARTED
2026-02-04 00:54:25.763114 | orchestrator | 2026-02-04 00:54:25 | INFO  | Task 99d41533-3a2e-46de-9bcb-072c9bc70adc is in state STARTED
2026-02-04 00:54:25.763120 | orchestrator | 2026-02-04 00:54:25 | INFO  | Task 989a85c0-e4f7-4c9c-8bfd-ae78edd6b507 is in state STARTED
2026-02-04 00:54:25.763126 | orchestrator | 2026-02-04 00:54:25 | INFO  | Task 7700df81-1273-4fd3-ae37-f13a7e14c535 is in state STARTED
2026-02-04 00:54:25.765530 | orchestrator | 2026-02-04 00:54:25 | INFO  | Task 4b82724b-ecb1-4316-873a-4f7a89ed7b41 is in state STARTED
2026-02-04 00:54:25.765593 | orchestrator | 2026-02-04 00:54:25 | INFO  | Wait 1 second(s) until the next check
2026-02-04 00:54:28.918681 | orchestrator | 2026-02-04 00:54:28 | INFO  | Task d30e904e-f5ed-49cd-95ad-3c4d6f37bdb8 is in state STARTED
2026-02-04 00:54:28.918764 | orchestrator | 2026-02-04 00:54:28 | INFO  | Task bf5c5209-ebef-40b1-87d0-8e826395a3c6 is in state STARTED
2026-02-04 00:54:28.918781 | orchestrator | 2026-02-04 00:54:28 | INFO  | Task 99d41533-3a2e-46de-9bcb-072c9bc70adc is in state STARTED
2026-02-04 00:54:28.918795 | orchestrator | 2026-02-04 00:54:28 | INFO  | Task 989a85c0-e4f7-4c9c-8bfd-ae78edd6b507 is in state STARTED
2026-02-04 00:54:28.918808 | orchestrator | 2026-02-04 00:54:28 | INFO  | Task 7700df81-1273-4fd3-ae37-f13a7e14c535 is in state STARTED
2026-02-04 00:54:28.918821 | orchestrator | 2026-02-04 00:54:28 | INFO  | Task 4b82724b-ecb1-4316-873a-4f7a89ed7b41 is in state STARTED
2026-02-04 00:54:28.918845 | orchestrator | 2026-02-04 00:54:28 | INFO  | Wait 1 second(s) until the next check
2026-02-04 00:54:31.950182 | orchestrator | 2026-02-04 00:54:31 | INFO  | Task d30e904e-f5ed-49cd-95ad-3c4d6f37bdb8 is in state STARTED
2026-02-04 00:54:31.952104 | orchestrator | 2026-02-04 00:54:31 | INFO  | Task bf5c5209-ebef-40b1-87d0-8e826395a3c6 is in state STARTED
2026-02-04 00:54:31.954534 | orchestrator | 2026-02-04 00:54:31 | INFO  | Task 99d41533-3a2e-46de-9bcb-072c9bc70adc is in state STARTED
2026-02-04 00:54:31.957635 | orchestrator | 2026-02-04 00:54:31 | INFO  | Task 989a85c0-e4f7-4c9c-8bfd-ae78edd6b507 is in state STARTED
2026-02-04 00:54:31.959873 | orchestrator | 2026-02-04 00:54:31 | INFO  | Task 7700df81-1273-4fd3-ae37-f13a7e14c535 is in state STARTED
2026-02-04 00:54:31.962441 | orchestrator | 2026-02-04 00:54:31 | INFO  | Task 4b82724b-ecb1-4316-873a-4f7a89ed7b41 is in state STARTED
2026-02-04 00:54:31.962518 | orchestrator | 2026-02-04 00:54:31 | INFO  | Wait 1 second(s) until the next check
2026-02-04 00:54:35.026107 | orchestrator | 2026-02-04 00:54:35 | INFO  | Task d30e904e-f5ed-49cd-95ad-3c4d6f37bdb8 is in state STARTED
2026-02-04 00:54:35.027405 | orchestrator | 2026-02-04 00:54:35 | INFO  | Task bf5c5209-ebef-40b1-87d0-8e826395a3c6 is in state SUCCESS
2026-02-04 00:54:35.030978 | orchestrator | 2026-02-04 00:54:35 | INFO  | Task 99d41533-3a2e-46de-9bcb-072c9bc70adc is in state STARTED
2026-02-04 00:54:35.033404 | orchestrator | 2026-02-04 00:54:35 | INFO  | Task 989a85c0-e4f7-4c9c-8bfd-ae78edd6b507 is in state STARTED
2026-02-04 00:54:35.035017 | orchestrator | 2026-02-04 00:54:35 | INFO  | Task 7700df81-1273-4fd3-ae37-f13a7e14c535 is in state STARTED
2026-02-04 00:54:35.037646 | orchestrator | 2026-02-04 00:54:35 | INFO  | Task 4b82724b-ecb1-4316-873a-4f7a89ed7b41 is in state STARTED
2026-02-04 00:54:35.038119 | orchestrator | 2026-02-04 00:54:35 | INFO  | Wait 1 second(s) until the next check
2026-02-04 00:54:38.095110 | orchestrator | 2026-02-04 00:54:38 | INFO  | Task d30e904e-f5ed-49cd-95ad-3c4d6f37bdb8 is in state STARTED
2026-02-04 00:54:38.096335 | orchestrator | 2026-02-04 00:54:38 | INFO  | Task 99d41533-3a2e-46de-9bcb-072c9bc70adc is in state STARTED
2026-02-04 00:54:38.098809 | orchestrator | 2026-02-04 00:54:38 | INFO  | Task 989a85c0-e4f7-4c9c-8bfd-ae78edd6b507 is in state STARTED
2026-02-04 00:54:38.101306 | orchestrator | 2026-02-04 00:54:38 | INFO  | Task 7700df81-1273-4fd3-ae37-f13a7e14c535 is in state STARTED
2026-02-04 00:54:38.102877 | orchestrator | 2026-02-04 00:54:38 | INFO  | Task 4b82724b-ecb1-4316-873a-4f7a89ed7b41 is in state STARTED
2026-02-04 00:54:38.104576 | orchestrator | 2026-02-04 00:54:38 | INFO  | Wait 1 second(s) until the next check
2026-02-04 00:54:41.154805 | orchestrator | 2026-02-04 00:54:41 | INFO  | Task d30e904e-f5ed-49cd-95ad-3c4d6f37bdb8 is in state STARTED
2026-02-04 00:54:41.154860 | orchestrator | 2026-02-04 00:54:41 | INFO  | Task 99d41533-3a2e-46de-9bcb-072c9bc70adc is in state SUCCESS
2026-02-04 00:54:41.156999 | orchestrator | 2026-02-04 00:54:41 | INFO  | Task 989a85c0-e4f7-4c9c-8bfd-ae78edd6b507 is in state STARTED
2026-02-04 00:54:41.157061 | orchestrator | 2026-02-04 00:54:41 | INFO  | Task 7700df81-1273-4fd3-ae37-f13a7e14c535 is in state STARTED
2026-02-04 00:54:41.158853 | orchestrator | 2026-02-04 00:54:41 | INFO  | Task 4b82724b-ecb1-4316-873a-4f7a89ed7b41 is in state STARTED
2026-02-04 00:54:41.158884 | orchestrator | 2026-02-04 00:54:41 | INFO  | Wait 1 second(s) until the next check
2026-02-04 00:54:44.194435 | orchestrator | 2026-02-04 00:54:44 | INFO  | Task d30e904e-f5ed-49cd-95ad-3c4d6f37bdb8 is in state STARTED
2026-02-04 00:54:44.196926 | orchestrator | 2026-02-04 00:54:44 | INFO  | Task 989a85c0-e4f7-4c9c-8bfd-ae78edd6b507 is in state STARTED
2026-02-04 00:54:44.198685 | orchestrator | 2026-02-04 00:54:44 | INFO  | Task 7700df81-1273-4fd3-ae37-f13a7e14c535 is in state STARTED
2026-02-04 00:54:44.199537 | orchestrator | 2026-02-04 00:54:44 | INFO  | Task 4b82724b-ecb1-4316-873a-4f7a89ed7b41 is in state STARTED
2026-02-04 00:54:44.199555 | orchestrator | 2026-02-04 00:54:44 | INFO  | Wait 1 second(s) until the next check
2026-02-04 00:54:47.249428 | orchestrator | 2026-02-04 00:54:47 | INFO  | Task d30e904e-f5ed-49cd-95ad-3c4d6f37bdb8 is in state STARTED
2026-02-04 00:54:47.253469 | orchestrator |
2026-02-04 00:54:47.253527 | orchestrator |
2026-02-04 00:54:47.253533 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] *************************
2026-02-04 00:54:47.253539 | orchestrator |
2026-02-04 00:54:47.253543 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-02-04 00:54:47.253548 | orchestrator | Wednesday 04 February 2026 00:54:29 +0000 (0:00:00.294) 0:00:00.294 ****
2026-02-04 00:54:47.253553 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-02-04 00:54:47.253557 | orchestrator |
2026-02-04 00:54:47.253561 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-02-04 00:54:47.253565 | orchestrator | Wednesday 04 February 2026 00:54:30 +0000 (0:00:00.871) 0:00:01.165 ****
2026-02-04 00:54:47.253569 | orchestrator | changed: [testbed-manager]
2026-02-04 00:54:47.253575 | orchestrator |
2026-02-04 00:54:47.253579 | orchestrator | TASK [Change server address in the kubeconfig file] ****************************
2026-02-04 00:54:47.253583 | orchestrator | Wednesday 04 February 2026 00:54:31 +0000 (0:00:01.635) 0:00:02.801 ****
2026-02-04 00:54:47.253587 | orchestrator | changed: [testbed-manager]
2026-02-04 00:54:47.253591 | orchestrator |
2026-02-04 00:54:47.253595 | orchestrator | PLAY RECAP *********************************************************************
2026-02-04 00:54:47.253604 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-04 00:54:47.253610 | orchestrator |
2026-02-04 00:54:47.253614 | orchestrator |
2026-02-04 00:54:47.253618 | orchestrator | TASKS RECAP ********************************************************************
2026-02-04 00:54:47.253627 | orchestrator | Wednesday 04 February 2026 00:54:32 +0000 (0:00:00.671) 0:00:03.472 ****
2026-02-04 00:54:47.253631 | orchestrator | ===============================================================================
2026-02-04 00:54:47.253635 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.64s
2026-02-04 00:54:47.253639 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.87s
2026-02-04 00:54:47.253643 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.67s
2026-02-04 00:54:47.253647 | orchestrator |
2026-02-04 00:54:47.253651 | orchestrator |
2026-02-04 00:54:47.253655 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2026-02-04 00:54:47.253659 | orchestrator |
2026-02-04 00:54:47.253662 | orchestrator | TASK [Get home directory of operator user] *************************************
2026-02-04 00:54:47.253666 | orchestrator | Wednesday 04 February 2026 00:54:28 +0000 (0:00:00.274) 0:00:00.274 ****
2026-02-04 00:54:47.253684 | orchestrator | ok: [testbed-manager]
2026-02-04 00:54:47.253690 | orchestrator |
2026-02-04 00:54:47.253694 | orchestrator | TASK [Create .kube directory] **************************************************
2026-02-04 00:54:47.253698 | orchestrator | Wednesday 04 February 2026 00:54:29 +0000 (0:00:00.828) 0:00:01.103 ****
2026-02-04 00:54:47.253701 | orchestrator | ok: [testbed-manager]
2026-02-04 00:54:47.253705 | orchestrator |
2026-02-04 00:54:47.253709 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-02-04 00:54:47.253713 | orchestrator | Wednesday 04 February 2026 00:54:30 +0000 (0:00:00.825) 0:00:01.929 ****
2026-02-04 00:54:47.253717 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-02-04 00:54:47.253721 | orchestrator |
2026-02-04 00:54:47.253725 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-02-04 00:54:47.253728 | orchestrator | Wednesday 04 February 2026 00:54:31 +0000 (0:00:00.991) 0:00:02.920 ****
2026-02-04 00:54:47.253732 | orchestrator | changed: [testbed-manager]
2026-02-04 00:54:47.253736 | orchestrator |
2026-02-04 00:54:47.253740 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2026-02-04 00:54:47.253744 | orchestrator | Wednesday 04 February 2026 00:54:33 +0000 (0:00:02.048) 0:00:04.968 ****
2026-02-04 00:54:47.253748 | orchestrator | changed: [testbed-manager]
2026-02-04 00:54:47.253752 | orchestrator |
2026-02-04 00:54:47.253756 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2026-02-04 00:54:47.253760 | orchestrator | Wednesday 04 February 2026 00:54:34 +0000 (0:00:00.679) 0:00:05.648 ****
2026-02-04 00:54:47.253763 | orchestrator | changed: [testbed-manager -> localhost]
2026-02-04 00:54:47.253767 | orchestrator |
2026-02-04 00:54:47.253771 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2026-02-04 00:54:47.253775 | orchestrator | Wednesday 04 February 2026 00:54:36 +0000 (0:00:02.219) 0:00:07.868 ****
2026-02-04 00:54:47.253779 | orchestrator | changed: [testbed-manager -> localhost]
2026-02-04 00:54:47.253783 | orchestrator |
2026-02-04 00:54:47.253787 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2026-02-04 00:54:47.253791 | orchestrator | Wednesday 04 February 2026 00:54:37 +0000 (0:00:01.055) 0:00:08.923 ****
2026-02-04 00:54:47.253794 | orchestrator | ok: [testbed-manager]
2026-02-04 00:54:47.253798 | orchestrator |
2026-02-04 00:54:47.253802 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2026-02-04 00:54:47.253806 | orchestrator | Wednesday 04 February 2026 00:54:37 +0000 (0:00:00.485) 0:00:09.409 ****
2026-02-04 00:54:47.253810 | orchestrator | ok: [testbed-manager]
2026-02-04 00:54:47.253814 | orchestrator |
2026-02-04 00:54:47.253818 | orchestrator | PLAY RECAP *********************************************************************
2026-02-04 00:54:47.253822 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-04 00:54:47.253826 | orchestrator |
2026-02-04 00:54:47.253830 | orchestrator |
2026-02-04 00:54:47.253834 | orchestrator | TASKS RECAP ********************************************************************
2026-02-04 00:54:47.253837 | orchestrator | Wednesday 04 February 2026 00:54:38 +0000 (0:00:00.457) 0:00:09.866 ****
2026-02-04 00:54:47.253841 | orchestrator | ===============================================================================
2026-02-04 00:54:47.253845 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 2.22s
2026-02-04 00:54:47.253849 | orchestrator | Write kubeconfig file --------------------------------------------------- 2.05s
2026-02-04 00:54:47.253853 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 1.05s
2026-02-04 00:54:47.253866 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.99s
2026-02-04 00:54:47.253871 | orchestrator | Get home directory of operator user ------------------------------------- 0.83s
2026-02-04 00:54:47.253874 | orchestrator | Create .kube directory -------------------------------------------------- 0.83s
2026-02-04 00:54:47.253882 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.68s
2026-02-04 00:54:47.253885 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.49s
2026-02-04 00:54:47.253889 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.46s
2026-02-04 00:54:47.253893 | orchestrator |
2026-02-04 00:54:47.253897 | orchestrator |
2026-02-04 00:54:47.253901 | orchestrator | PLAY [Set kolla_action_rabbitmq] ***********************************************
2026-02-04 00:54:47.253905 | orchestrator |
2026-02-04 00:54:47.253909 | orchestrator | TASK [Inform the user about the following task] ********************************
2026-02-04 00:54:47.253912 | orchestrator | Wednesday 04 February 2026 00:52:09 +0000 (0:00:00.117) 0:00:00.117 ****
2026-02-04 00:54:47.253916 | orchestrator | ok: [localhost] => {
2026-02-04 00:54:47.253921 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine."
2026-02-04 00:54:47.253925 | orchestrator | }
2026-02-04 00:54:47.253930 | orchestrator |
2026-02-04 00:54:47.253936 | orchestrator | TASK [Check RabbitMQ service] **************************************************
2026-02-04 00:54:47.253942 | orchestrator | Wednesday 04 February 2026 00:52:09 +0000 (0:00:00.217) 0:00:00.334 ****
2026-02-04 00:54:47.253949 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"}
2026-02-04 00:54:47.253957 | orchestrator | ...ignoring
2026-02-04 00:54:47.253963 | orchestrator |
2026-02-04 00:54:47.253969 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ******
2026-02-04 00:54:47.253975 | orchestrator | Wednesday 04 February 2026 00:52:14 +0000 (0:00:04.627) 0:00:04.962 ****
2026-02-04 00:54:47.253981 | orchestrator | skipping: [localhost]
2026-02-04 00:54:47.253986 | orchestrator |
2026-02-04 00:54:47.253993 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] *****************************
2026-02-04 00:54:47.253999 | orchestrator | Wednesday 04 February 2026 00:52:14 +0000 (0:00:00.300) 0:00:05.263 ****
2026-02-04 00:54:47.254005 | orchestrator | ok: [localhost]
2026-02-04 00:54:47.254074 | orchestrator |
2026-02-04 00:54:47.254083 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-04 00:54:47.254087 | orchestrator |
2026-02-04 00:54:47.254092 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-04 00:54:47.254097 | orchestrator | Wednesday 04 February 2026 00:52:15 +0000 (0:00:00.981) 0:00:06.244 ****
2026-02-04 00:54:47.254101 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:54:47.254106 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:54:47.254111 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:54:47.254116 | orchestrator |
2026-02-04 00:54:47.254120 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-04 00:54:47.254125 | orchestrator | Wednesday 04 February 2026 00:52:16 +0000 (0:00:01.228) 0:00:07.473 ****
2026-02-04 00:54:47.254129 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True)
2026-02-04 00:54:47.254134 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True)
2026-02-04 00:54:47.254138 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True)
2026-02-04 00:54:47.254143 | orchestrator |
2026-02-04 00:54:47.254147 | orchestrator | PLAY [Apply role rabbitmq] *****************************************************
2026-02-04 00:54:47.254152 | orchestrator |
2026-02-04 00:54:47.254156 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2026-02-04 00:54:47.254161 | orchestrator | Wednesday 04 February 2026 00:52:18 +0000 (0:00:01.521) 0:00:08.995 ****
2026-02-04 00:54:47.254180 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 00:54:47.254187 | orchestrator |
2026-02-04 00:54:47.254191 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2026-02-04 00:54:47.254196 | orchestrator | Wednesday 04 February 2026 00:52:19 +0000 (0:00:00.797) 0:00:09.793 ****
2026-02-04 00:54:47.254200 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:54:47.254205 | orchestrator |
2026-02-04 00:54:47.254209 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] *********************************
2026-02-04 00:54:47.254218 | orchestrator | Wednesday 04 February 2026 00:52:20 +0000 (0:00:01.516) 0:00:11.309 ****
2026-02-04 00:54:47.254222 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:54:47.254227 | orchestrator |
2026-02-04 00:54:47.254231 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] *************************************
2026-02-04 00:54:47.254236 | orchestrator | Wednesday 04 February 2026 00:52:21 +0000 (0:00:00.551) 0:00:11.860 ****
2026-02-04 00:54:47.254240 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:54:47.254245 | orchestrator |
2026-02-04 00:54:47.254249 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ******
2026-02-04 00:54:47.254254 | orchestrator | Wednesday 04 February 2026 00:52:21 +0000 (0:00:00.534) 0:00:12.395 ****
2026-02-04 00:54:47.254258 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:54:47.254263 | orchestrator |
2026-02-04 00:54:47.254267 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] **********************
2026-02-04 00:54:47.254271 | orchestrator | Wednesday 04 February 2026 00:52:22 +0000 (0:00:00.433) 0:00:12.828 ****
2026-02-04 00:54:47.254276 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:54:47.254280 | orchestrator |
2026-02-04 00:54:47.254285 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2026-02-04 00:54:47.254289 | orchestrator | Wednesday 04 February 2026 00:52:23 +0000 (0:00:01.099) 0:00:13.928 ****
2026-02-04 00:54:47.254294 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 00:54:47.254299 | orchestrator |
2026-02-04 00:54:47.254303 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2026-02-04 00:54:47.254313 | orchestrator | Wednesday 04 February 2026 00:52:24 +0000 (0:00:01.364) 0:00:15.292 ****
2026-02-04 00:54:47.254318 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:54:47.254323 | orchestrator |
2026-02-04 00:54:47.254327 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] ***************************************
2026-02-04 00:54:47.254331 | orchestrator | Wednesday 04 February 2026 00:52:26 +0000 (0:00:01.355) 0:00:16.648 ****
2026-02-04 00:54:47.254336 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:54:47.254340 | orchestrator |
2026-02-04 00:54:47.254345 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] ***************************
2026-02-04 00:54:47.254350 | orchestrator | Wednesday 04 February 2026 00:52:26 +0000 (0:00:00.632) 0:00:17.280 ****
2026-02-04 00:54:47.254354 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:54:47.254358 | orchestrator |
2026-02-04 00:54:47.254363 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] ****************************
2026-02-04 00:54:47.254367 | orchestrator | Wednesday 04 February 2026 00:52:28 +0000 (0:00:01.888) 0:00:19.168 ****
2026-02-04 00:54:47.254375 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-04 00:54:47.254384 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-04 00:54:47.254398 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-04 00:54:47.254408 | orchestrator |
2026-02-04 00:54:47.254416 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ******************
2026-02-04 00:54:47.254422 | orchestrator | Wednesday 04 February 2026 00:52:31 +0000 (0:00:02.481) 0:00:21.650 ****
2026-02-04 00:54:47.254477 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-04 00:54:47.254495 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-04 00:54:47.254532 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-04 00:54:47.254537 | orchestrator |
2026-02-04 00:54:47.254541 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] *******************************
2026-02-04 00:54:47.254545 | orchestrator | Wednesday 04 February 2026 00:52:33 +0000 (0:00:02.671) 0:00:24.321 ****
2026-02-04 00:54:47.254549 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2026-02-04 00:54:47.254553 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2026-02-04 00:54:47.254557 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2026-02-04 00:54:47.254560 | orchestrator |
2026-02-04 00:54:47.254564 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] ***********************************
2026-02-04 00:54:47.254568 | orchestrator | Wednesday 04 February 2026 00:52:37 +0000 (0:00:03.165) 0:00:27.487 ****
2026-02-04 00:54:47.254572 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2026-02-04 00:54:47.254576 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2026-02-04 00:54:47.254580 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2026-02-04 00:54:47.254583 | orchestrator |
2026-02-04 00:54:47.254587 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] **************************************
2026-02-04 00:54:47.254595 | orchestrator | Wednesday 04 February 2026 00:52:40 +0000 (0:00:03.687) 0:00:31.175 ****
2026-02-04 00:54:47.254600 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2026-02-04 00:54:47.254603 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2026-02-04 00:54:47.254607 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2026-02-04 00:54:47.254611 | orchestrator |
2026-02-04 00:54:47.254615 | orchestrator | TASK [rabbitmq : Copying over advanced.config] *********************************
2026-02-04 00:54:47.254619 | orchestrator | Wednesday 04 February 2026 00:52:42 +0000 (0:00:02.044) 0:00:33.219 ****
2026-02-04 00:54:47.254622 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2026-02-04 00:54:47.254626 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2026-02-04 00:54:47.254630 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2026-02-04 00:54:47.254634 | orchestrator |
2026-02-04 00:54:47.254638 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ********************************
2026-02-04 00:54:47.254641 | orchestrator | Wednesday 04 February 2026 00:52:45 +0000 (0:00:02.747) 0:00:35.967 ****
2026-02-04 00:54:47.254652 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-02-04 00:54:47.254656 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-02-04 00:54:47.254660 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-02-04 00:54:47.254664 | orchestrator |
2026-02-04 00:54:47.254668 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] *********************************
2026-02-04 00:54:47.254672 | orchestrator | Wednesday 04 February 2026 00:52:48 +0000 (0:00:02.783) 0:00:38.750 ****
2026-02-04 00:54:47.254675 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-02-04 00:54:47.254679 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-02-04 00:54:47.254683 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-02-04 00:54:47.254687 | orchestrator |
2026-02-04 00:54:47.254691 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2026-02-04 00:54:47.254695 | orchestrator | Wednesday 04 February 2026 00:52:50 +0000 (0:00:02.391) 0:00:41.142 ****
2026-02-04
00:54:47.254698 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:54:47.254702 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:54:47.254706 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:54:47.254710 | orchestrator | 2026-02-04 00:54:47.254714 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2026-02-04 00:54:47.254718 | orchestrator | Wednesday 04 February 2026 00:52:51 +0000 (0:00:01.141) 0:00:42.283 **** 2026-02-04 00:54:47.254722 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-04 00:54:47.254730 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': 
'/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-04 00:54:47.254737 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-04 00:54:47.254744 | orchestrator | 2026-02-04 00:54:47.254748 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2026-02-04 00:54:47.254752 | orchestrator | Wednesday 04 February 
2026 00:52:53 +0000 (0:00:01.853) 0:00:44.136 **** 2026-02-04 00:54:47.254756 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:54:47.254760 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:54:47.254764 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:54:47.254768 | orchestrator | 2026-02-04 00:54:47.254772 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2026-02-04 00:54:47.254776 | orchestrator | Wednesday 04 February 2026 00:52:55 +0000 (0:00:01.502) 0:00:45.639 **** 2026-02-04 00:54:47.254779 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:54:47.254783 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:54:47.254787 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:54:47.254791 | orchestrator | 2026-02-04 00:54:47.254795 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2026-02-04 00:54:47.254799 | orchestrator | Wednesday 04 February 2026 00:53:03 +0000 (0:00:08.216) 0:00:53.856 **** 2026-02-04 00:54:47.254803 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:54:47.254807 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:54:47.254810 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:54:47.254814 | orchestrator | 2026-02-04 00:54:47.254818 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-02-04 00:54:47.254822 | orchestrator | 2026-02-04 00:54:47.254826 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-02-04 00:54:47.254830 | orchestrator | Wednesday 04 February 2026 00:53:03 +0000 (0:00:00.530) 0:00:54.387 **** 2026-02-04 00:54:47.254834 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:54:47.254838 | orchestrator | 2026-02-04 00:54:47.254841 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-02-04 00:54:47.254845 | orchestrator | Wednesday 04 
February 2026 00:53:04 +0000 (0:00:00.727) 0:00:55.115 **** 2026-02-04 00:54:47.254849 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:54:47.254853 | orchestrator | 2026-02-04 00:54:47.254857 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-02-04 00:54:47.254860 | orchestrator | Wednesday 04 February 2026 00:53:05 +0000 (0:00:00.396) 0:00:55.512 **** 2026-02-04 00:54:47.254864 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:54:47.254868 | orchestrator | 2026-02-04 00:54:47.254872 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-02-04 00:54:47.254876 | orchestrator | Wednesday 04 February 2026 00:53:12 +0000 (0:00:07.359) 0:01:02.871 **** 2026-02-04 00:54:47.254879 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:54:47.254883 | orchestrator | 2026-02-04 00:54:47.254887 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-02-04 00:54:47.254891 | orchestrator | 2026-02-04 00:54:47.254895 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-02-04 00:54:47.254899 | orchestrator | Wednesday 04 February 2026 00:54:00 +0000 (0:00:48.164) 0:01:51.036 **** 2026-02-04 00:54:47.254906 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:54:47.254910 | orchestrator | 2026-02-04 00:54:47.254914 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-02-04 00:54:47.254918 | orchestrator | Wednesday 04 February 2026 00:54:01 +0000 (0:00:00.855) 0:01:51.891 **** 2026-02-04 00:54:47.254921 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:54:47.254925 | orchestrator | 2026-02-04 00:54:47.254929 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-02-04 00:54:47.254933 | orchestrator | Wednesday 04 February 2026 00:54:02 +0000 (0:00:00.869) 
0:01:52.761 **** 2026-02-04 00:54:47.254938 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:54:47.254944 | orchestrator | 2026-02-04 00:54:47.254954 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-02-04 00:54:47.254961 | orchestrator | Wednesday 04 February 2026 00:54:05 +0000 (0:00:03.298) 0:01:56.060 **** 2026-02-04 00:54:47.254966 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:54:47.254972 | orchestrator | 2026-02-04 00:54:47.254978 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-02-04 00:54:47.254985 | orchestrator | 2026-02-04 00:54:47.254990 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-02-04 00:54:47.254994 | orchestrator | Wednesday 04 February 2026 00:54:20 +0000 (0:00:14.610) 0:02:10.670 **** 2026-02-04 00:54:47.254998 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:54:47.255002 | orchestrator | 2026-02-04 00:54:47.255009 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-02-04 00:54:47.255013 | orchestrator | Wednesday 04 February 2026 00:54:21 +0000 (0:00:00.863) 0:02:11.534 **** 2026-02-04 00:54:47.255017 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:54:47.255021 | orchestrator | 2026-02-04 00:54:47.255025 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-02-04 00:54:47.255029 | orchestrator | Wednesday 04 February 2026 00:54:21 +0000 (0:00:00.932) 0:02:12.466 **** 2026-02-04 00:54:47.255032 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:54:47.255036 | orchestrator | 2026-02-04 00:54:47.255040 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-02-04 00:54:47.255044 | orchestrator | Wednesday 04 February 2026 00:54:24 +0000 (0:00:02.081) 0:02:14.548 **** 2026-02-04 00:54:47.255047 | 
orchestrator | changed: [testbed-node-2] 2026-02-04 00:54:47.255051 | orchestrator | 2026-02-04 00:54:47.255055 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2026-02-04 00:54:47.255059 | orchestrator | 2026-02-04 00:54:47.255063 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2026-02-04 00:54:47.255067 | orchestrator | Wednesday 04 February 2026 00:54:39 +0000 (0:00:15.385) 0:02:29.933 **** 2026-02-04 00:54:47.255070 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 00:54:47.255074 | orchestrator | 2026-02-04 00:54:47.255081 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2026-02-04 00:54:47.255085 | orchestrator | Wednesday 04 February 2026 00:54:40 +0000 (0:00:01.217) 0:02:31.150 **** 2026-02-04 00:54:47.255089 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:54:47.255093 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:54:47.255097 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:54:47.255101 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-02-04 00:54:47.255104 | orchestrator | enable_outward_rabbitmq_True 2026-02-04 00:54:47.255108 | orchestrator | 2026-02-04 00:54:47.255112 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2026-02-04 00:54:47.255116 | orchestrator | skipping: no hosts matched 2026-02-04 00:54:47.255120 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-02-04 00:54:47.255124 | orchestrator | outward_rabbitmq_restart 2026-02-04 00:54:47.255127 | orchestrator | 2026-02-04 00:54:47.255131 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2026-02-04 00:54:47.255135 | orchestrator | skipping: no hosts matched 2026-02-04 00:54:47.255139 | orchestrator | 2026-02-04 00:54:47.255149 | 
orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2026-02-04 00:54:47.255153 | orchestrator | skipping: no hosts matched 2026-02-04 00:54:47.255156 | orchestrator | 2026-02-04 00:54:47.255160 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 00:54:47.255164 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-02-04 00:54:47.255242 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-02-04 00:54:47.255246 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-04 00:54:47.255250 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-04 00:54:47.255253 | orchestrator | 2026-02-04 00:54:47.255257 | orchestrator | 2026-02-04 00:54:47.255261 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-04 00:54:47.255265 | orchestrator | Wednesday 04 February 2026 00:54:44 +0000 (0:00:03.536) 0:02:34.687 **** 2026-02-04 00:54:47.255269 | orchestrator | =============================================================================== 2026-02-04 00:54:47.255273 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 78.16s 2026-02-04 00:54:47.255276 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 12.74s 2026-02-04 00:54:47.255280 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 8.21s 2026-02-04 00:54:47.255284 | orchestrator | Check RabbitMQ service -------------------------------------------------- 4.63s 2026-02-04 00:54:47.255288 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 3.69s 2026-02-04 00:54:47.255292 | orchestrator | rabbitmq : 
Enable all stable feature flags ------------------------------ 3.54s 2026-02-04 00:54:47.255295 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 3.17s 2026-02-04 00:54:47.255299 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 2.78s 2026-02-04 00:54:47.255303 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 2.75s 2026-02-04 00:54:47.255307 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 2.67s 2026-02-04 00:54:47.255310 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 2.48s 2026-02-04 00:54:47.255314 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 2.45s 2026-02-04 00:54:47.255318 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 2.39s 2026-02-04 00:54:47.255322 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode ---------------------- 2.20s 2026-02-04 00:54:47.255325 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 2.04s 2026-02-04 00:54:47.255329 | orchestrator | rabbitmq : Remove ha-all policy from RabbitMQ --------------------------- 1.89s 2026-02-04 00:54:47.255333 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.85s 2026-02-04 00:54:47.255340 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.52s 2026-02-04 00:54:47.255344 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.52s 2026-02-04 00:54:47.255348 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 1.50s 2026-02-04 00:54:47.255352 | orchestrator | 2026-02-04 00:54:47 | INFO  | Task 989a85c0-e4f7-4c9c-8bfd-ae78edd6b507 is in state SUCCESS 2026-02-04 00:54:47.255357 | orchestrator | 2026-02-04 
00:54:47 | INFO  | Task 7700df81-1273-4fd3-ae37-f13a7e14c535 is in state STARTED 2026-02-04 00:54:47.255361 | orchestrator | 2026-02-04 00:54:47 | INFO  | Task 4b82724b-ecb1-4316-873a-4f7a89ed7b41 is in state STARTED 2026-02-04 00:54:47.255638 | orchestrator | 2026-02-04 00:54:47 | INFO  | Wait 1 second(s) until the next check [… identical STARTED polling of tasks d30e904e-f5ed-49cd-95ad-3c4d6f37bdb8, 7700df81-1273-4fd3-ae37-f13a7e14c535 and 4b82724b-ecb1-4316-873a-4f7a89ed7b41 repeated every ~3 s from 00:54:50 to 00:55:48 …] 2026-02-04 00:55:51.434726 | orchestrator | 2026-02-04 00:55:51 | INFO  | Task d30e904e-f5ed-49cd-95ad-3c4d6f37bdb8 is in state STARTED 2026-02-04 00:55:51.436174 | orchestrator | 2026-02-04 00:55:51 | INFO  | Task 7700df81-1273-4fd3-ae37-f13a7e14c535 is in state SUCCESS 2026-02-04 00:55:51.438401 | orchestrator | 2026-02-04 00:55:51.438457 | orchestrator | 2026-02-04 00:55:51.438463 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-04 00:55:51.438469 | orchestrator | 2026-02-04 00:55:51.438474 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-04 00:55:51.438479 | orchestrator | Wednesday 04 February 2026 00:53:12 +0000 (0:00:00.233) 0:00:00.233 **** 2026-02-04 00:55:51.438501 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:55:51.438508 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:55:51.438527 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:55:51.438532 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:55:51.438535 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:55:51.438540 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:55:51.438543 | orchestrator | 2026-02-04
00:55:51.438548 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-04 00:55:51.438552 | orchestrator | Wednesday 04 February 2026 00:53:13 +0000 (0:00:01.092) 0:00:01.325 **** 2026-02-04 00:55:51.438556 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2026-02-04 00:55:51.438562 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2026-02-04 00:55:51.438568 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2026-02-04 00:55:51.438574 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2026-02-04 00:55:51.438579 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2026-02-04 00:55:51.438584 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2026-02-04 00:55:51.438589 | orchestrator | 2026-02-04 00:55:51.438598 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2026-02-04 00:55:51.438605 | orchestrator | 2026-02-04 00:55:51.438612 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2026-02-04 00:55:51.438618 | orchestrator | Wednesday 04 February 2026 00:53:14 +0000 (0:00:01.725) 0:00:03.050 **** 2026-02-04 00:55:51.438672 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-04 00:55:51.438682 | orchestrator | 2026-02-04 00:55:51.438688 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2026-02-04 00:55:51.438694 | orchestrator | Wednesday 04 February 2026 00:53:16 +0000 (0:00:01.820) 0:00:04.871 **** 2026-02-04 00:55:51.438703 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:55:51.438712 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:55:51.438719 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:55:51.438769 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:55:51.438779 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:55:51.438794 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:55:51.438800 | orchestrator | 2026-02-04 00:55:51.438821 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2026-02-04 00:55:51.438827 | orchestrator | Wednesday 04 February 2026 00:53:18 +0000 (0:00:01.593) 0:00:06.465 **** 2026-02-04 00:55:51.438834 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:55:51.438840 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:55:51.438846 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:55:51.438852 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:55:51.438858 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:55:51.438864 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:55:51.438871 | orchestrator | 2026-02-04 00:55:51.438877 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2026-02-04 00:55:51.438882 | orchestrator | Wednesday 04 February 2026 00:53:20 +0000 (0:00:02.202) 0:00:08.667 **** 2026-02-04 00:55:51.438893 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 
'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:55:51.438905 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:55:51.438917 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:55:51.438923 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:55:51.438930 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:55:51.438936 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:55:51.438943 | orchestrator | 2026-02-04 00:55:51.438950 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2026-02-04 00:55:51.438956 | orchestrator | Wednesday 04 February 2026 00:53:22 +0000 (0:00:02.120) 0:00:10.788 **** 2026-02-04 00:55:51.438962 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:55:51.438968 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:55:51.438976 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 
'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:55:51.438988 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:55:51.439002 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:55:51.439010 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:55:51.439017 | orchestrator | 2026-02-04 00:55:51.439028 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2026-02-04 00:55:51.439036 | orchestrator | Wednesday 04 February 2026 00:53:24 +0000 (0:00:01.920) 0:00:12.709 **** 
2026-02-04 00:55:51.439043 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:55:51.439050 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:55:51.439080 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:55:51.439087 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:55:51.439094 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 
'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:55:51.439102 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:55:51.439116 | orchestrator | 2026-02-04 00:55:51.439123 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2026-02-04 00:55:51.439129 | orchestrator | Wednesday 04 February 2026 00:53:26 +0000 (0:00:01.818) 0:00:14.527 **** 2026-02-04 00:55:51.439136 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:55:51.439143 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:55:51.439150 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:55:51.439158 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:55:51.439164 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:55:51.439171 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:55:51.439178 | orchestrator | 2026-02-04 00:55:51.439185 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2026-02-04 00:55:51.439195 | orchestrator | Wednesday 04 February 2026 00:53:29 +0000 (0:00:02.938) 0:00:17.466 **** 2026-02-04 00:55:51.439202 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2026-02-04 00:55:51.439208 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': 
'192.168.16.11'}) 2026-02-04 00:55:51.439214 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2026-02-04 00:55:51.439220 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2026-02-04 00:55:51.439226 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2026-02-04 00:55:51.439232 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2026-02-04 00:55:51.439238 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-04 00:55:51.439244 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-04 00:55:51.439255 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-04 00:55:51.439261 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-04 00:55:51.439268 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-04 00:55:51.439274 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-04 00:55:51.439280 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-02-04 00:55:51.439288 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-02-04 00:55:51.439294 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-02-04 00:55:51.439300 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 
'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-02-04 00:55:51.439306 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-02-04 00:55:51.439313 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-02-04 00:55:51.439319 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-04 00:55:51.439327 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-04 00:55:51.439334 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-04 00:55:51.439346 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-04 00:55:51.439353 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-04 00:55:51.439360 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-04 00:55:51.439366 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-04 00:55:51.439372 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-04 00:55:51.439378 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-04 00:55:51.439385 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-04 00:55:51.439392 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-04 00:55:51.439399 | orchestrator | changed: 
[testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-04 00:55:51.439406 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-04 00:55:51.439412 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-04 00:55:51.439419 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-04 00:55:51.439425 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-04 00:55:51.439432 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-04 00:55:51.439439 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-04 00:55:51.439446 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-02-04 00:55:51.439457 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-02-04 00:55:51.439464 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-02-04 00:55:51.439471 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-02-04 00:55:51.439478 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-02-04 00:55:51.439485 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-02-04 00:55:51.439491 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2026-02-04 00:55:51.439499 | orchestrator | ok: [testbed-node-2] => 
(item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2026-02-04 00:55:51.439511 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2026-02-04 00:55:51.439519 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2026-02-04 00:55:51.439525 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2026-02-04 00:55:51.439533 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2026-02-04 00:55:51.439540 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-02-04 00:55:51.439547 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-02-04 00:55:51.439559 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-02-04 00:55:51.439566 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-02-04 00:55:51.439573 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-02-04 00:55:51.439579 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-02-04 00:55:51.439586 | orchestrator | 2026-02-04 00:55:51.439593 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-04 00:55:51.439601 | orchestrator | Wednesday 04 February 2026 
00:53:49 +0000 (0:00:20.529) 0:00:37.995 **** 2026-02-04 00:55:51.439608 | orchestrator | 2026-02-04 00:55:51.439615 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-04 00:55:51.439621 | orchestrator | Wednesday 04 February 2026 00:53:50 +0000 (0:00:00.557) 0:00:38.554 **** 2026-02-04 00:55:51.439627 | orchestrator | 2026-02-04 00:55:51.439635 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-04 00:55:51.439642 | orchestrator | Wednesday 04 February 2026 00:53:50 +0000 (0:00:00.152) 0:00:38.707 **** 2026-02-04 00:55:51.439648 | orchestrator | 2026-02-04 00:55:51.439654 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-04 00:55:51.439661 | orchestrator | Wednesday 04 February 2026 00:53:50 +0000 (0:00:00.078) 0:00:38.785 **** 2026-02-04 00:55:51.439668 | orchestrator | 2026-02-04 00:55:51.439675 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-04 00:55:51.439682 | orchestrator | Wednesday 04 February 2026 00:53:50 +0000 (0:00:00.079) 0:00:38.865 **** 2026-02-04 00:55:51.439688 | orchestrator | 2026-02-04 00:55:51.439695 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-04 00:55:51.439702 | orchestrator | Wednesday 04 February 2026 00:53:50 +0000 (0:00:00.175) 0:00:39.040 **** 2026-02-04 00:55:51.439708 | orchestrator | 2026-02-04 00:55:51.439716 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2026-02-04 00:55:51.439723 | orchestrator | Wednesday 04 February 2026 00:53:51 +0000 (0:00:00.179) 0:00:39.220 **** 2026-02-04 00:55:51.439730 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:55:51.439737 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:55:51.439744 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:55:51.439751 | orchestrator 
| ok: [testbed-node-1] 2026-02-04 00:55:51.439758 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:55:51.439765 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:55:51.439772 | orchestrator | 2026-02-04 00:55:51.439779 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2026-02-04 00:55:51.439786 | orchestrator | Wednesday 04 February 2026 00:53:53 +0000 (0:00:02.347) 0:00:41.567 **** 2026-02-04 00:55:51.439793 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:55:51.439800 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:55:51.439808 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:55:51.439815 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:55:51.439821 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:55:51.439828 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:55:51.439835 | orchestrator | 2026-02-04 00:55:51.439842 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2026-02-04 00:55:51.439849 | orchestrator | 2026-02-04 00:55:51.439856 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-02-04 00:55:51.439864 | orchestrator | Wednesday 04 February 2026 00:54:20 +0000 (0:00:27.378) 0:01:08.946 **** 2026-02-04 00:55:51.439875 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 00:55:51.439882 | orchestrator | 2026-02-04 00:55:51.439888 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-02-04 00:55:51.439907 | orchestrator | Wednesday 04 February 2026 00:54:22 +0000 (0:00:01.845) 0:01:10.791 **** 2026-02-04 00:55:51.439914 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 00:55:51.439921 | orchestrator | 2026-02-04 00:55:51.439928 | orchestrator | TASK [ovn-db : 
Checking for any existing OVN DB container volumes] ************* 2026-02-04 00:55:51.439936 | orchestrator | Wednesday 04 February 2026 00:54:23 +0000 (0:00:00.723) 0:01:11.514 **** 2026-02-04 00:55:51.439943 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:55:51.439949 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:55:51.439956 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:55:51.439964 | orchestrator | 2026-02-04 00:55:51.439971 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2026-02-04 00:55:51.439977 | orchestrator | Wednesday 04 February 2026 00:54:25 +0000 (0:00:02.126) 0:01:13.640 **** 2026-02-04 00:55:51.439984 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:55:51.439991 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:55:51.439998 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:55:51.440011 | orchestrator | 2026-02-04 00:55:51.440018 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2026-02-04 00:55:51.440025 | orchestrator | Wednesday 04 February 2026 00:54:25 +0000 (0:00:00.450) 0:01:14.091 **** 2026-02-04 00:55:51.440032 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:55:51.440039 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:55:51.440045 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:55:51.440052 | orchestrator | 2026-02-04 00:55:51.440094 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2026-02-04 00:55:51.440100 | orchestrator | Wednesday 04 February 2026 00:54:26 +0000 (0:00:00.419) 0:01:14.510 **** 2026-02-04 00:55:51.440106 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:55:51.440112 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:55:51.440117 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:55:51.440123 | orchestrator | 2026-02-04 00:55:51.440130 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 
2026-02-04 00:55:51.440137 | orchestrator | Wednesday 04 February 2026 00:54:26 +0000 (0:00:00.494) 0:01:15.005 **** 2026-02-04 00:55:51.440143 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:55:51.440150 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:55:51.440157 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:55:51.440163 | orchestrator | 2026-02-04 00:55:51.440170 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2026-02-04 00:55:51.440178 | orchestrator | Wednesday 04 February 2026 00:54:28 +0000 (0:00:02.011) 0:01:17.017 **** 2026-02-04 00:55:51.440184 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:55:51.440192 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:55:51.440198 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:55:51.440205 | orchestrator | 2026-02-04 00:55:51.440212 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2026-02-04 00:55:51.440218 | orchestrator | Wednesday 04 February 2026 00:54:29 +0000 (0:00:00.494) 0:01:17.511 **** 2026-02-04 00:55:51.440225 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:55:51.440232 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:55:51.440239 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:55:51.440246 | orchestrator | 2026-02-04 00:55:51.440253 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2026-02-04 00:55:51.440260 | orchestrator | Wednesday 04 February 2026 00:54:30 +0000 (0:00:00.880) 0:01:18.391 **** 2026-02-04 00:55:51.440267 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:55:51.440273 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:55:51.440280 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:55:51.440286 | orchestrator | 2026-02-04 00:55:51.440293 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2026-02-04 
00:55:51.440299 | orchestrator | Wednesday 04 February 2026 00:54:31 +0000 (0:00:00.773) 0:01:19.165 **** 2026-02-04 00:55:51.440306 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:55:51.440330 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:55:51.440338 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:55:51.440345 | orchestrator | 2026-02-04 00:55:51.440351 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2026-02-04 00:55:51.440357 | orchestrator | Wednesday 04 February 2026 00:54:31 +0000 (0:00:00.964) 0:01:20.129 **** 2026-02-04 00:55:51.440363 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:55:51.440370 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:55:51.440375 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:55:51.440381 | orchestrator | 2026-02-04 00:55:51.440388 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2026-02-04 00:55:51.440394 | orchestrator | Wednesday 04 February 2026 00:54:32 +0000 (0:00:00.486) 0:01:20.616 **** 2026-02-04 00:55:51.440400 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:55:51.440406 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:55:51.440413 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:55:51.440419 | orchestrator | 2026-02-04 00:55:51.440425 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2026-02-04 00:55:51.440431 | orchestrator | Wednesday 04 February 2026 00:54:32 +0000 (0:00:00.394) 0:01:21.011 **** 2026-02-04 00:55:51.440437 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:55:51.440443 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:55:51.440449 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:55:51.440456 | orchestrator | 2026-02-04 00:55:51.440462 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2026-02-04 
00:55:51.440468 | orchestrator | Wednesday 04 February 2026 00:54:33 +0000 (0:00:00.365) 0:01:21.376 **** 2026-02-04 00:55:51.440475 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:55:51.440481 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:55:51.440488 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:55:51.440494 | orchestrator | 2026-02-04 00:55:51.440500 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2026-02-04 00:55:51.440506 | orchestrator | Wednesday 04 February 2026 00:54:33 +0000 (0:00:00.677) 0:01:22.054 **** 2026-02-04 00:55:51.440513 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:55:51.440526 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:55:51.440533 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:55:51.440539 | orchestrator | 2026-02-04 00:55:51.440545 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2026-02-04 00:55:51.440552 | orchestrator | Wednesday 04 February 2026 00:54:34 +0000 (0:00:00.366) 0:01:22.421 **** 2026-02-04 00:55:51.440558 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:55:51.440565 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:55:51.440571 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:55:51.440578 | orchestrator | 2026-02-04 00:55:51.440584 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2026-02-04 00:55:51.440591 | orchestrator | Wednesday 04 February 2026 00:54:34 +0000 (0:00:00.388) 0:01:22.809 **** 2026-02-04 00:55:51.440598 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:55:51.440604 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:55:51.440611 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:55:51.440617 | orchestrator | 2026-02-04 00:55:51.440624 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2026-02-04 
00:55:51.440631 | orchestrator | Wednesday 04 February 2026 00:54:35 +0000 (0:00:00.363) 0:01:23.173 **** 2026-02-04 00:55:51.440637 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:55:51.440644 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:55:51.440659 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:55:51.440667 | orchestrator | 2026-02-04 00:55:51.440673 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-02-04 00:55:51.440680 | orchestrator | Wednesday 04 February 2026 00:54:35 +0000 (0:00:00.388) 0:01:23.561 **** 2026-02-04 00:55:51.440686 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 00:55:51.440700 | orchestrator | 2026-02-04 00:55:51.440707 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2026-02-04 00:55:51.440713 | orchestrator | Wednesday 04 February 2026 00:54:36 +0000 (0:00:01.109) 0:01:24.671 **** 2026-02-04 00:55:51.440720 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:55:51.440726 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:55:51.440733 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:55:51.440739 | orchestrator | 2026-02-04 00:55:51.440746 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2026-02-04 00:55:51.440752 | orchestrator | Wednesday 04 February 2026 00:54:37 +0000 (0:00:00.569) 0:01:25.241 **** 2026-02-04 00:55:51.440759 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:55:51.440766 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:55:51.440772 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:55:51.440778 | orchestrator | 2026-02-04 00:55:51.440785 | orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2026-02-04 00:55:51.440792 | orchestrator | Wednesday 04 February 2026 00:54:37 +0000 (0:00:00.674) 
0:01:25.915 **** 2026-02-04 00:55:51.440799 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:55:51.440806 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:55:51.440812 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:55:51.440819 | orchestrator | 2026-02-04 00:55:51.440825 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2026-02-04 00:55:51.440832 | orchestrator | Wednesday 04 February 2026 00:54:38 +0000 (0:00:00.947) 0:01:26.863 **** 2026-02-04 00:55:51.440838 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:55:51.440844 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:55:51.440851 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:55:51.440858 | orchestrator | 2026-02-04 00:55:51.440864 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2026-02-04 00:55:51.440871 | orchestrator | Wednesday 04 February 2026 00:54:39 +0000 (0:00:00.628) 0:01:27.491 **** 2026-02-04 00:55:51.440877 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:55:51.440884 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:55:51.440891 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:55:51.440898 | orchestrator | 2026-02-04 00:55:51.440905 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2026-02-04 00:55:51.440911 | orchestrator | Wednesday 04 February 2026 00:54:39 +0000 (0:00:00.589) 0:01:28.081 **** 2026-02-04 00:55:51.440918 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:55:51.440925 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:55:51.440931 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:55:51.440938 | orchestrator | 2026-02-04 00:55:51.440945 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2026-02-04 00:55:51.440952 | orchestrator | Wednesday 04 February 2026 00:54:40 +0000 
(0:00:00.878) 0:01:28.959 **** 2026-02-04 00:55:51.440958 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:55:51.440964 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:55:51.440971 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:55:51.440978 | orchestrator | 2026-02-04 00:55:51.440985 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2026-02-04 00:55:51.440991 | orchestrator | Wednesday 04 February 2026 00:54:41 +0000 (0:00:00.958) 0:01:29.917 **** 2026-02-04 00:55:51.440998 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:55:51.441005 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:55:51.441011 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:55:51.441018 | orchestrator | 2026-02-04 00:55:51.441025 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-02-04 00:55:51.441031 | orchestrator | Wednesday 04 February 2026 00:54:42 +0000 (0:00:00.375) 0:01:30.292 **** 2026-02-04 00:55:51.441038 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:55:51.441174 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:55:51.441190 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:55:51.441420 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:55:51.441440 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:55:51.441448 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:55:51.441455 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 
00:55:51.441463 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:55:51.441469 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:55:51.441475 | orchestrator | 2026-02-04 00:55:51.441515 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-02-04 00:55:51.441525 | orchestrator | Wednesday 04 February 2026 00:54:43 +0000 (0:00:01.733) 0:01:32.026 **** 2026-02-04 00:55:51.441532 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:55:51.441549 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:55:51.441559 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:55:51.441565 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:55:51.441580 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:55:51.441587 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:55:51.441593 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:55:51.441599 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:55:51.441606 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:55:51.441613 | orchestrator | 2026-02-04 00:55:51.441619 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2026-02-04 00:55:51.441625 | orchestrator | Wednesday 04 February 2026 00:54:48 +0000 (0:00:05.092) 0:01:37.119 **** 2026-02-04 00:55:51.441633 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:55:51.441644 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:55:51.441651 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:55:51.441660 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:55:51.441667 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:55:51.441678 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:55:51.441685 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 
'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:55:51.441692 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:55:51.441699 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:55:51.441705 | orchestrator | 2026-02-04 00:55:51.441711 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-02-04 00:55:51.441717 | orchestrator | Wednesday 04 February 2026 00:54:51 +0000 (0:00:02.704) 0:01:39.823 **** 2026-02-04 00:55:51.441723 | orchestrator | 2026-02-04 00:55:51.441729 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-02-04 00:55:51.441734 | orchestrator | Wednesday 04 February 2026 00:54:51 +0000 (0:00:00.080) 0:01:39.904 **** 2026-02-04 00:55:51.441740 | orchestrator | 2026-02-04 00:55:51.441746 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-02-04 00:55:51.441757 | orchestrator | Wednesday 04 February 2026 00:54:51 +0000 (0:00:00.081) 0:01:39.985 **** 2026-02-04 
00:55:51.441764 | orchestrator | 2026-02-04 00:55:51.441770 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-02-04 00:55:51.441776 | orchestrator | Wednesday 04 February 2026 00:54:51 +0000 (0:00:00.083) 0:01:40.068 **** 2026-02-04 00:55:51.441783 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:55:51.441789 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:55:51.441795 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:55:51.441802 | orchestrator | 2026-02-04 00:55:51.441808 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-02-04 00:55:51.441814 | orchestrator | Wednesday 04 February 2026 00:54:59 +0000 (0:00:07.867) 0:01:47.935 **** 2026-02-04 00:55:51.441820 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:55:51.441826 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:55:51.441832 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:55:51.441838 | orchestrator | 2026-02-04 00:55:51.441843 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-02-04 00:55:51.441849 | orchestrator | Wednesday 04 February 2026 00:55:08 +0000 (0:00:08.325) 0:01:56.260 **** 2026-02-04 00:55:51.441856 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:55:51.441862 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:55:51.441868 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:55:51.441875 | orchestrator | 2026-02-04 00:55:51.441881 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-02-04 00:55:51.441887 | orchestrator | Wednesday 04 February 2026 00:55:11 +0000 (0:00:03.010) 0:01:59.271 **** 2026-02-04 00:55:51.441894 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:55:51.441900 | orchestrator | 2026-02-04 00:55:51.441906 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] 
****************************** 2026-02-04 00:55:51.441913 | orchestrator | Wednesday 04 February 2026 00:55:11 +0000 (0:00:00.818) 0:02:00.090 **** 2026-02-04 00:55:51.441919 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:55:51.441926 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:55:51.441932 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:55:51.441938 | orchestrator | 2026-02-04 00:55:51.441944 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-02-04 00:55:51.441951 | orchestrator | Wednesday 04 February 2026 00:55:13 +0000 (0:00:01.403) 0:02:01.493 **** 2026-02-04 00:55:51.441961 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:55:51.441967 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:55:51.441973 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:55:51.441980 | orchestrator | 2026-02-04 00:55:51.441986 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-02-04 00:55:51.441993 | orchestrator | Wednesday 04 February 2026 00:55:14 +0000 (0:00:01.052) 0:02:02.546 **** 2026-02-04 00:55:51.442155 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:55:51.442164 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:55:51.442172 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:55:51.442179 | orchestrator | 2026-02-04 00:55:51.442185 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-02-04 00:55:51.442192 | orchestrator | Wednesday 04 February 2026 00:55:15 +0000 (0:00:01.016) 0:02:03.563 **** 2026-02-04 00:55:51.442199 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:55:51.442206 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:55:51.442213 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:55:51.442219 | orchestrator | 2026-02-04 00:55:51.442226 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-02-04 
00:55:51.442234 | orchestrator | Wednesday 04 February 2026 00:55:16 +0000 (0:00:00.983) 0:02:04.546 **** 2026-02-04 00:55:51.442240 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:55:51.442247 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:55:51.442263 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:55:51.442270 | orchestrator | 2026-02-04 00:55:51.442277 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-02-04 00:55:51.442292 | orchestrator | Wednesday 04 February 2026 00:55:17 +0000 (0:00:01.266) 0:02:05.813 **** 2026-02-04 00:55:51.442299 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:55:51.442306 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:55:51.442312 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:55:51.442318 | orchestrator | 2026-02-04 00:55:51.442324 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2026-02-04 00:55:51.442330 | orchestrator | Wednesday 04 February 2026 00:55:18 +0000 (0:00:00.847) 0:02:06.661 **** 2026-02-04 00:55:51.442337 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:55:51.442344 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:55:51.442351 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:55:51.442358 | orchestrator | 2026-02-04 00:55:51.442364 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-02-04 00:55:51.442371 | orchestrator | Wednesday 04 February 2026 00:55:18 +0000 (0:00:00.392) 0:02:07.053 **** 2026-02-04 00:55:51.442379 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:55:51.442388 | orchestrator | ok: 
[testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:55:51.442395 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:55:51.442403 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:55:51.442412 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:55:51.442419 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:55:51.442431 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:55:51.442436 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:55:51.442453 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:55:51.442457 | orchestrator | 2026-02-04 00:55:51.442461 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-02-04 00:55:51.442465 | orchestrator | Wednesday 04 February 2026 00:55:20 +0000 (0:00:01.585) 0:02:08.639 **** 2026-02-04 00:55:51.442470 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 00:55:51.442474 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 00:55:51.442478 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 00:55:51.442482 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 00:55:51.442486 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 00:55:51.442491 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 00:55:51.442495 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 00:55:51.442505 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 00:55:51.442512 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 00:55:51.442517 | orchestrator |
2026-02-04 00:55:51.442521 | orchestrator | TASK [ovn-db : Check ovn containers] *******************************************
2026-02-04 00:55:51.442525 | orchestrator | Wednesday 04 February 2026 00:55:24 +0000 (0:00:04.415) 0:02:13.054 ****
2026-02-04 00:55:51.442532 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 00:55:51.442536 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 00:55:51.442540 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 00:55:51.442544 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 00:55:51.442549 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 00:55:51.442553 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 00:55:51.442557 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 00:55:51.442561 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 00:55:51.442572 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 00:55:51.442576 | orchestrator |
2026-02-04 00:55:51.442580 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-02-04 00:55:51.442584 | orchestrator | Wednesday 04 February 2026 00:55:28 +0000 (0:00:03.145) 0:02:16.200 ****
2026-02-04 00:55:51.442588 | orchestrator |
2026-02-04 00:55:51.442593 | orchestrator
| TASK [ovn-db : Flush handlers] *************************************************
2026-02-04 00:55:51.442597 | orchestrator | Wednesday 04 February 2026 00:55:28 +0000 (0:00:00.189) 0:02:16.390 ****
2026-02-04 00:55:51.442601 | orchestrator |
2026-02-04 00:55:51.442604 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-02-04 00:55:51.442609 | orchestrator | Wednesday 04 February 2026 00:55:28 +0000 (0:00:00.173) 0:02:16.564 ****
2026-02-04 00:55:51.442613 | orchestrator |
2026-02-04 00:55:51.442616 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2026-02-04 00:55:51.442620 | orchestrator | Wednesday 04 February 2026 00:55:28 +0000 (0:00:00.168) 0:02:16.732 ****
2026-02-04 00:55:51.442624 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:55:51.442628 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:55:51.442632 | orchestrator |
2026-02-04 00:55:51.442639 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2026-02-04 00:55:51.442643 | orchestrator | Wednesday 04 February 2026 00:55:35 +0000 (0:00:06.431) 0:02:23.164 ****
2026-02-04 00:55:51.442647 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:55:51.442651 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:55:51.442655 | orchestrator |
2026-02-04 00:55:51.442659 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2026-02-04 00:55:51.442663 | orchestrator | Wednesday 04 February 2026 00:55:41 +0000 (0:00:06.416) 0:02:29.581 ****
2026-02-04 00:55:51.442667 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:55:51.442671 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:55:51.442675 | orchestrator |
2026-02-04 00:55:51.442679 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2026-02-04 00:55:51.442683 | orchestrator | Wednesday 04 February 2026 00:55:43 +0000 (0:00:02.023) 0:02:31.604 ****
2026-02-04 00:55:51.442687 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:55:51.442691 | orchestrator |
2026-02-04 00:55:51.442695 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2026-02-04 00:55:51.442699 | orchestrator | Wednesday 04 February 2026 00:55:43 +0000 (0:00:00.234) 0:02:31.838 ****
2026-02-04 00:55:51.442703 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:55:51.442707 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:55:51.442710 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:55:51.442714 | orchestrator |
2026-02-04 00:55:51.442718 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2026-02-04 00:55:51.442722 | orchestrator | Wednesday 04 February 2026 00:55:44 +0000 (0:00:00.919) 0:02:32.758 ****
2026-02-04 00:55:51.442726 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:55:51.442730 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:55:51.442734 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:55:51.442738 | orchestrator |
2026-02-04 00:55:51.442742 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2026-02-04 00:55:51.442746 | orchestrator | Wednesday 04 February 2026 00:55:45 +0000 (0:00:00.694) 0:02:33.453 ****
2026-02-04 00:55:51.442750 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:55:51.442754 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:55:51.442758 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:55:51.442761 | orchestrator |
2026-02-04 00:55:51.442770 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2026-02-04 00:55:51.442774 | orchestrator | Wednesday 04 February 2026 00:55:46 +0000 (0:00:00.853) 0:02:34.306 ****
2026-02-04 00:55:51.442778 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:55:51.442782 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:55:51.442786 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:55:51.442790 | orchestrator |
2026-02-04 00:55:51.442794 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2026-02-04 00:55:51.442798 | orchestrator | Wednesday 04 February 2026 00:55:46 +0000 (0:00:00.762) 0:02:35.069 ****
2026-02-04 00:55:51.442802 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:55:51.442806 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:55:51.442810 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:55:51.442814 | orchestrator |
2026-02-04 00:55:51.442818 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2026-02-04 00:55:51.442822 | orchestrator | Wednesday 04 February 2026 00:55:47 +0000 (0:00:00.848) 0:02:35.918 ****
2026-02-04 00:55:51.442826 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:55:51.442830 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:55:51.442834 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:55:51.442838 | orchestrator |
2026-02-04 00:55:51.442842 | orchestrator | PLAY RECAP *********************************************************************
2026-02-04 00:55:51.442846 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2026-02-04 00:55:51.442850 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2026-02-04 00:55:51.442855 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2026-02-04 00:55:51.442859 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-04 00:55:51.442863 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-04 00:55:51.442870 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-04 00:55:51.442874 | orchestrator |
2026-02-04 00:55:51.442878 | orchestrator |
2026-02-04 00:55:51.442882 | orchestrator | TASKS RECAP ********************************************************************
2026-02-04 00:55:51.442885 | orchestrator | Wednesday 04 February 2026 00:55:48 +0000 (0:00:01.039) 0:02:36.958 ****
2026-02-04 00:55:51.442889 | orchestrator | ===============================================================================
2026-02-04 00:55:51.442893 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 27.38s
2026-02-04 00:55:51.442897 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 20.53s
2026-02-04 00:55:51.442901 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 14.74s
2026-02-04 00:55:51.442905 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 14.30s
2026-02-04 00:55:51.442909 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 5.09s
2026-02-04 00:55:51.442913 | orchestrator | ovn-db : Restart ovn-northd container ----------------------------------- 5.03s
2026-02-04 00:55:51.442917 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.42s
2026-02-04 00:55:51.442923 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 3.15s
2026-02-04 00:55:51.442927 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.94s
2026-02-04 00:55:51.442931 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.70s
2026-02-04 00:55:51.442934 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 2.35s
2026-02-04 00:55:51.442944 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 2.20s
2026-02-04 00:55:51.442948 | orchestrator | ovn-db : Checking for any existing OVN DB container volumes ------------- 2.13s
2026-02-04 00:55:51.442952 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 2.12s
2026-02-04 00:55:51.442956 | orchestrator | ovn-db : Establish whether the OVN SB cluster has already existed ------- 2.01s
2026-02-04 00:55:51.442960 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.92s
2026-02-04 00:55:51.442964 | orchestrator | ovn-db : include_tasks -------------------------------------------------- 1.85s
2026-02-04 00:55:51.442968 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 1.82s
2026-02-04 00:55:51.442972 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.82s
2026-02-04 00:55:51.442975 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.73s
2026-02-04 00:55:51.442980 | orchestrator | 2026-02-04 00:55:51 | INFO  | Task 4b82724b-ecb1-4316-873a-4f7a89ed7b41 is in state STARTED
2026-02-04 00:55:51.442984 | orchestrator | 2026-02-04 00:55:51 | INFO  | Wait 1 second(s) until the next check
2026-02-04 00:55:54.501927 | orchestrator | 2026-02-04 00:55:54 | INFO  | Task d30e904e-f5ed-49cd-95ad-3c4d6f37bdb8 is in state STARTED
2026-02-04 00:55:54.502049 | orchestrator | 2026-02-04 00:55:54 | INFO  | Task 4b82724b-ecb1-4316-873a-4f7a89ed7b41 is in state STARTED
2026-02-04 00:55:54.502101 | orchestrator | 2026-02-04 00:55:54 | INFO  | Wait 1 second(s) until the next check
2026-02-04 00:55:57.569632 | orchestrator | 2026-02-04 00:55:57 | INFO  | Task d30e904e-f5ed-49cd-95ad-3c4d6f37bdb8 is in state STARTED
2026-02-04 00:55:57.571484 | orchestrator | 2026-02-04 00:55:57 | INFO  | Task 4b82724b-ecb1-4316-873a-4f7a89ed7b41 is in state STARTED
2026-02-04 00:55:57.571546 | orchestrator | 2026-02-04 00:55:57 | INFO  | Wait 1 second(s) until the next check
[… the same two status lines and the "Wait 1 second(s) until the next check" line repeat unchanged roughly every 3 seconds; tasks d30e904e-f5ed-49cd-95ad-3c4d6f37bdb8 and 4b82724b-ecb1-4316-873a-4f7a89ed7b41 remain in state STARTED from 00:56:00 through at least 00:59:09 …]
STARTED 2026-02-04 00:59:09.881927 | orchestrator | 2026-02-04 00:59:09 | INFO  | Task 4b82724b-ecb1-4316-873a-4f7a89ed7b41 is in state STARTED 2026-02-04 00:59:09.881958 | orchestrator | 2026-02-04 00:59:09 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:59:12.942561 | orchestrator | 2026-02-04 00:59:12 | INFO  | Task d30e904e-f5ed-49cd-95ad-3c4d6f37bdb8 is in state STARTED 2026-02-04 00:59:12.943951 | orchestrator | 2026-02-04 00:59:12 | INFO  | Task 4b82724b-ecb1-4316-873a-4f7a89ed7b41 is in state STARTED 2026-02-04 00:59:12.943999 | orchestrator | 2026-02-04 00:59:12 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:59:15.996456 | orchestrator | 2026-02-04 00:59:15 | INFO  | Task d30e904e-f5ed-49cd-95ad-3c4d6f37bdb8 is in state STARTED 2026-02-04 00:59:16.000036 | orchestrator | 2026-02-04 00:59:16 | INFO  | Task 4b82724b-ecb1-4316-873a-4f7a89ed7b41 is in state STARTED 2026-02-04 00:59:16.000105 | orchestrator | 2026-02-04 00:59:16 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:59:19.057158 | orchestrator | 2026-02-04 00:59:19 | INFO  | Task d30e904e-f5ed-49cd-95ad-3c4d6f37bdb8 is in state STARTED 2026-02-04 00:59:19.058407 | orchestrator | 2026-02-04 00:59:19 | INFO  | Task 4b82724b-ecb1-4316-873a-4f7a89ed7b41 is in state STARTED 2026-02-04 00:59:19.058452 | orchestrator | 2026-02-04 00:59:19 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:59:22.102908 | orchestrator | 2026-02-04 00:59:22 | INFO  | Task d30e904e-f5ed-49cd-95ad-3c4d6f37bdb8 is in state STARTED 2026-02-04 00:59:22.102977 | orchestrator | 2026-02-04 00:59:22 | INFO  | Task 4b82724b-ecb1-4316-873a-4f7a89ed7b41 is in state STARTED 2026-02-04 00:59:22.102984 | orchestrator | 2026-02-04 00:59:22 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:59:25.160335 | orchestrator | 2026-02-04 00:59:25 | INFO  | Task d30e904e-f5ed-49cd-95ad-3c4d6f37bdb8 is in state STARTED 2026-02-04 00:59:25.164472 | orchestrator | 2026-02-04 00:59:25 | INFO  
| Task 4b82724b-ecb1-4316-873a-4f7a89ed7b41 is in state STARTED 2026-02-04 00:59:25.164558 | orchestrator | 2026-02-04 00:59:25 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:59:28.197424 | orchestrator | 2026-02-04 00:59:28 | INFO  | Task d30e904e-f5ed-49cd-95ad-3c4d6f37bdb8 is in state STARTED 2026-02-04 00:59:28.198937 | orchestrator | 2026-02-04 00:59:28 | INFO  | Task 4b82724b-ecb1-4316-873a-4f7a89ed7b41 is in state STARTED 2026-02-04 00:59:28.199409 | orchestrator | 2026-02-04 00:59:28 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:59:31.254883 | orchestrator | 2026-02-04 00:59:31 | INFO  | Task d30e904e-f5ed-49cd-95ad-3c4d6f37bdb8 is in state STARTED 2026-02-04 00:59:31.257051 | orchestrator | 2026-02-04 00:59:31 | INFO  | Task 4b82724b-ecb1-4316-873a-4f7a89ed7b41 is in state STARTED 2026-02-04 00:59:31.257105 | orchestrator | 2026-02-04 00:59:31 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:59:34.300751 | orchestrator | 2026-02-04 00:59:34 | INFO  | Task d30e904e-f5ed-49cd-95ad-3c4d6f37bdb8 is in state STARTED 2026-02-04 00:59:34.301394 | orchestrator | 2026-02-04 00:59:34 | INFO  | Task 4b82724b-ecb1-4316-873a-4f7a89ed7b41 is in state STARTED 2026-02-04 00:59:34.301436 | orchestrator | 2026-02-04 00:59:34 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:59:37.352904 | orchestrator | 2026-02-04 00:59:37 | INFO  | Task d30e904e-f5ed-49cd-95ad-3c4d6f37bdb8 is in state STARTED 2026-02-04 00:59:37.353999 | orchestrator | 2026-02-04 00:59:37 | INFO  | Task 4b82724b-ecb1-4316-873a-4f7a89ed7b41 is in state STARTED 2026-02-04 00:59:37.354075 | orchestrator | 2026-02-04 00:59:37 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:59:40.429640 | orchestrator | 2026-02-04 00:59:40 | INFO  | Task d30e904e-f5ed-49cd-95ad-3c4d6f37bdb8 is in state SUCCESS 2026-02-04 00:59:40.433350 | orchestrator | 2026-02-04 00:59:40.433437 | orchestrator | 2026-02-04 00:59:40.433444 | orchestrator | PLAY [Group hosts 
based on configuration] ************************************** 2026-02-04 00:59:40.433450 | orchestrator | 2026-02-04 00:59:40.433455 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-04 00:59:40.433462 | orchestrator | Wednesday 04 February 2026 00:51:45 +0000 (0:00:00.646) 0:00:00.646 **** 2026-02-04 00:59:40.433468 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:59:40.433477 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:59:40.433486 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:59:40.433493 | orchestrator | 2026-02-04 00:59:40.433498 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-04 00:59:40.433505 | orchestrator | Wednesday 04 February 2026 00:51:46 +0000 (0:00:00.752) 0:00:01.399 **** 2026-02-04 00:59:40.433512 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True) 2026-02-04 00:59:40.433518 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True) 2026-02-04 00:59:40.433524 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True) 2026-02-04 00:59:40.433531 | orchestrator | 2026-02-04 00:59:40.433537 | orchestrator | PLAY [Apply role loadbalancer] ************************************************* 2026-02-04 00:59:40.433542 | orchestrator | 2026-02-04 00:59:40.433548 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-02-04 00:59:40.433554 | orchestrator | Wednesday 04 February 2026 00:51:46 +0000 (0:00:00.685) 0:00:02.085 **** 2026-02-04 00:59:40.433560 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 00:59:40.433567 | orchestrator | 2026-02-04 00:59:40.433573 | orchestrator | TASK [loadbalancer : Check IPv6 support] *************************************** 2026-02-04 00:59:40.433579 | orchestrator | Wednesday 04 February 2026 00:51:47 +0000 
(0:00:00.963) 0:00:03.048 **** 2026-02-04 00:59:40.433585 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:59:40.433591 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:59:40.433597 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:59:40.433602 | orchestrator | 2026-02-04 00:59:40.433608 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-02-04 00:59:40.433614 | orchestrator | Wednesday 04 February 2026 00:51:48 +0000 (0:00:00.894) 0:00:03.943 **** 2026-02-04 00:59:40.433620 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 00:59:40.433625 | orchestrator | 2026-02-04 00:59:40.433645 | orchestrator | TASK [sysctl : Check IPv6 support] ********************************************* 2026-02-04 00:59:40.433652 | orchestrator | Wednesday 04 February 2026 00:51:49 +0000 (0:00:01.162) 0:00:05.105 **** 2026-02-04 00:59:40.433661 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:59:40.433669 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:59:40.433675 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:59:40.433682 | orchestrator | 2026-02-04 00:59:40.433688 | orchestrator | TASK [sysctl : Setting sysctl values] ****************************************** 2026-02-04 00:59:40.433721 | orchestrator | Wednesday 04 February 2026 00:51:50 +0000 (0:00:00.783) 0:00:05.889 **** 2026-02-04 00:59:40.433728 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-02-04 00:59:40.433735 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-02-04 00:59:40.433741 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-02-04 00:59:40.433747 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-02-04 00:59:40.433753 | orchestrator | changed: [testbed-node-2] => (item={'name': 
'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-02-04 00:59:40.433759 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-02-04 00:59:40.433767 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-02-04 00:59:40.433774 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-02-04 00:59:40.433823 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-02-04 00:59:40.433832 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-02-04 00:59:40.433838 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-02-04 00:59:40.433845 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-02-04 00:59:40.433877 | orchestrator | 2026-02-04 00:59:40.433884 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-02-04 00:59:40.433937 | orchestrator | Wednesday 04 February 2026 00:51:54 +0000 (0:00:04.014) 0:00:09.904 **** 2026-02-04 00:59:40.433985 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2026-02-04 00:59:40.434098 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2026-02-04 00:59:40.434107 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2026-02-04 00:59:40.434114 | orchestrator | 2026-02-04 00:59:40.434120 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-02-04 00:59:40.434127 | orchestrator | Wednesday 04 February 2026 00:51:55 +0000 (0:00:01.409) 0:00:11.313 **** 2026-02-04 00:59:40.434133 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2026-02-04 00:59:40.434140 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2026-02-04 00:59:40.434146 | 
orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2026-02-04 00:59:40.434152 | orchestrator | 2026-02-04 00:59:40.434158 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-02-04 00:59:40.434166 | orchestrator | Wednesday 04 February 2026 00:51:58 +0000 (0:00:02.760) 0:00:14.074 **** 2026-02-04 00:59:40.434173 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2026-02-04 00:59:40.434180 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:59:40.434210 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2026-02-04 00:59:40.434217 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:59:40.434223 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2026-02-04 00:59:40.434229 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:59:40.434235 | orchestrator | 2026-02-04 00:59:40.434241 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2026-02-04 00:59:40.434247 | orchestrator | Wednesday 04 February 2026 00:52:01 +0000 (0:00:02.782) 0:00:16.857 **** 2026-02-04 00:59:40.434256 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-04 00:59:40.434269 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-04 00:59:40.434284 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-04 00:59:40.434301 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-04 00:59:40.434308 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-04 00:59:40.434333 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-04 00:59:40.434341 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-04 00:59:40.434348 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-04 00:59:40.434354 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-04 00:59:40.434361 | orchestrator | 2026-02-04 00:59:40.434370 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2026-02-04 00:59:40.434381 | orchestrator | Wednesday 04 February 2026 00:52:04 +0000 (0:00:02.789) 0:00:19.647 **** 2026-02-04 00:59:40.434388 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:59:40.434395 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:59:40.434401 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:59:40.434407 | orchestrator | 2026-02-04 00:59:40.434414 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2026-02-04 00:59:40.434419 | orchestrator | Wednesday 04 February 2026 00:52:07 +0000 (0:00:03.431) 0:00:23.078 **** 2026-02-04 00:59:40.434425 | orchestrator | changed: [testbed-node-0] => (item=users) 2026-02-04 00:59:40.434431 | orchestrator | changed: [testbed-node-2] => (item=users) 2026-02-04 00:59:40.434436 | orchestrator | changed: [testbed-node-1] => (item=users) 2026-02-04 00:59:40.434442 | orchestrator | changed: [testbed-node-1] => (item=rules) 2026-02-04 00:59:40.434447 | orchestrator | changed: 
[testbed-node-0] => (item=rules) 2026-02-04 00:59:40.434453 | orchestrator | changed: [testbed-node-2] => (item=rules) 2026-02-04 00:59:40.434458 | orchestrator | 2026-02-04 00:59:40.434464 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2026-02-04 00:59:40.434470 | orchestrator | Wednesday 04 February 2026 00:52:10 +0000 (0:00:03.121) 0:00:26.200 **** 2026-02-04 00:59:40.434475 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:59:40.434481 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:59:40.434486 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:59:40.434492 | orchestrator | 2026-02-04 00:59:40.434498 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2026-02-04 00:59:40.434504 | orchestrator | Wednesday 04 February 2026 00:52:12 +0000 (0:00:02.090) 0:00:28.290 **** 2026-02-04 00:59:40.434511 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:59:40.434517 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:59:40.434523 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:59:40.434528 | orchestrator | 2026-02-04 00:59:40.434533 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2026-02-04 00:59:40.434538 | orchestrator | Wednesday 04 February 2026 00:52:17 +0000 (0:00:04.455) 0:00:32.746 **** 2026-02-04 00:59:40.434544 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-04 00:59:40.434609 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-04 00:59:40.434620 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-04 00:59:40.434634 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__9ca3fe6d0296f9e537e91af577864625829615ad', '__omit_place_holder__9ca3fe6d0296f9e537e91af577864625829615ad'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-04 
00:59:40.434641 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:59:40.434660 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-04 00:59:40.434667 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-04 00:59:40.434721 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-04 00:59:40.434729 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__9ca3fe6d0296f9e537e91af577864625829615ad', '__omit_place_holder__9ca3fe6d0296f9e537e91af577864625829615ad'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-04 00:59:40.434736 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:59:40.434751 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-04 00:59:40.434763 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-04 00:59:40.434773 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-04 00:59:40.434819 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__9ca3fe6d0296f9e537e91af577864625829615ad', '__omit_place_holder__9ca3fe6d0296f9e537e91af577864625829615ad'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-04 00:59:40.434826 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:59:40.434832 | orchestrator | 2026-02-04 00:59:40.434839 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2026-02-04 00:59:40.434845 | orchestrator | Wednesday 04 February 2026 00:52:18 +0000 (0:00:01.361) 0:00:34.108 **** 2026-02-04 00:59:40.434851 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-04 00:59:40.434858 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-04 00:59:40.434872 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-04 00:59:40.434885 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-04 00:59:40.434892 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-04 00:59:40.434902 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-04 00:59:40.434908 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-04 00:59:40.434915 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__9ca3fe6d0296f9e537e91af577864625829615ad', '__omit_place_holder__9ca3fe6d0296f9e537e91af577864625829615ad'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-04 00:59:40.434923 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__9ca3fe6d0296f9e537e91af577864625829615ad', '__omit_place_holder__9ca3fe6d0296f9e537e91af577864625829615ad'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-04 00:59:40.434935 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-04 00:59:40.434947 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-04 00:59:40.434962 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__9ca3fe6d0296f9e537e91af577864625829615ad', '__omit_place_holder__9ca3fe6d0296f9e537e91af577864625829615ad'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-04 00:59:40.434969 | orchestrator | 2026-02-04 00:59:40.434975 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2026-02-04 00:59:40.434982 | orchestrator | Wednesday 04 February 2026 00:52:22 +0000 (0:00:04.169) 0:00:38.278 **** 2026-02-04 00:59:40.434988 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 
'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-04 00:59:40.434994 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-04 00:59:40.435001 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-04 00:59:40.435013 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 
'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-04 00:59:40.435024 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-04 00:59:40.435031 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-04 00:59:40.435042 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 
'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-04 00:59:40.435049 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-04 00:59:40.435057 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-04 00:59:40.435064 | orchestrator | 2026-02-04 00:59:40.435070 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2026-02-04 00:59:40.435076 | orchestrator | Wednesday 04 February 2026 00:52:27 +0000 (0:00:04.736) 0:00:43.014 **** 2026-02-04 00:59:40.435083 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-02-04 00:59:40.435090 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-02-04 00:59:40.435096 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-02-04 00:59:40.435107 | orchestrator | 2026-02-04 00:59:40.435114 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2026-02-04 00:59:40.435120 | orchestrator | Wednesday 04 February 2026 00:52:32 +0000 (0:00:04.391) 0:00:47.406 **** 2026-02-04 00:59:40.435127 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-02-04 00:59:40.435134 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-02-04 00:59:40.435140 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-02-04 00:59:40.435147 | orchestrator | 2026-02-04 00:59:40.435162 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2026-02-04 00:59:40.435170 | orchestrator | Wednesday 04 February 2026 00:52:38 +0000 (0:00:06.669) 0:00:54.075 **** 2026-02-04 00:59:40.435176 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:59:40.435182 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:59:40.435188 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:59:40.435194 | orchestrator | 2026-02-04 00:59:40.435200 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2026-02-04 00:59:40.435206 | orchestrator | Wednesday 04 February 2026 00:52:40 +0000 (0:00:01.349) 0:00:55.425 **** 2026-02-04 00:59:40.435212 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-02-04 00:59:40.435219 | orchestrator | changed: [testbed-node-1] => 
(item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-02-04 00:59:40.435225 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-02-04 00:59:40.435231 | orchestrator | 2026-02-04 00:59:40.435237 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2026-02-04 00:59:40.435243 | orchestrator | Wednesday 04 February 2026 00:52:43 +0000 (0:00:03.725) 0:00:59.150 **** 2026-02-04 00:59:40.435250 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-02-04 00:59:40.435257 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-02-04 00:59:40.435263 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-02-04 00:59:40.435270 | orchestrator | 2026-02-04 00:59:40.435274 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2026-02-04 00:59:40.435277 | orchestrator | Wednesday 04 February 2026 00:52:47 +0000 (0:00:04.102) 0:01:03.253 **** 2026-02-04 00:59:40.435282 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2026-02-04 00:59:40.435290 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2026-02-04 00:59:40.435294 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2026-02-04 00:59:40.435298 | orchestrator | 2026-02-04 00:59:40.435302 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2026-02-04 00:59:40.435306 | orchestrator | Wednesday 04 February 2026 00:52:50 +0000 (0:00:02.833) 0:01:06.087 **** 2026-02-04 00:59:40.435310 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2026-02-04 00:59:40.435314 | orchestrator | changed: 
[testbed-node-2] => (item=haproxy-internal.pem) 2026-02-04 00:59:40.435318 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2026-02-04 00:59:40.435321 | orchestrator | 2026-02-04 00:59:40.435325 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-02-04 00:59:40.435329 | orchestrator | Wednesday 04 February 2026 00:52:53 +0000 (0:00:02.329) 0:01:08.417 **** 2026-02-04 00:59:40.435333 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 00:59:40.435343 | orchestrator | 2026-02-04 00:59:40.435347 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2026-02-04 00:59:40.435351 | orchestrator | Wednesday 04 February 2026 00:52:54 +0000 (0:00:01.285) 0:01:09.702 **** 2026-02-04 00:59:40.435355 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-04 00:59:40.435360 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-04 00:59:40.435368 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-04 00:59:40.435372 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-04 00:59:40.435378 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-04 00:59:40.435388 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-04 00:59:40.435398 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-04 00:59:40.435404 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 
2026-02-04 00:59:40.435410 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-04 00:59:40.435417 | orchestrator | 2026-02-04 00:59:40.435423 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2026-02-04 00:59:40.435429 | orchestrator | Wednesday 04 February 2026 00:52:59 +0000 (0:00:05.630) 0:01:15.334 **** 2026-02-04 00:59:40.435438 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-04 00:59:40.435443 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-04 00:59:40.435447 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-04 00:59:40.435455 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:59:40.435459 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-04 00:59:40.435467 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-04 00:59:40.435471 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-04 00:59:40.435475 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:59:40.435479 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-04 00:59:40.435487 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-04 00:59:40.435593 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-04 00:59:40.435600 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:59:40.435604 | orchestrator | 2026-02-04 00:59:40.435608 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2026-02-04 00:59:40.435612 | orchestrator | Wednesday 04 February 2026 00:53:00 +0000 (0:00:00.771) 0:01:16.105 **** 2026-02-04 00:59:40.435619 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-04 00:59:40.435627 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-04 00:59:40.435631 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-04 00:59:40.435635 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:59:40.435639 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-02-04 00:59:40.435647 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-04 00:59:40.435652 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-04 00:59:40.435656 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:59:40.435660 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-02-04 00:59:40.435670 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-04 00:59:40.435674 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-04 00:59:40.435678 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:59:40.435682 | orchestrator |
2026-02-04 00:59:40.435686 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ********
2026-02-04 00:59:40.435690 | orchestrator | Wednesday 04 February 2026 00:53:01 +0000 (0:00:01.020) 0:01:17.126 ****
2026-02-04 00:59:40.435771 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-02-04 00:59:40.435781 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-04 00:59:40.435785 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-04 00:59:40.435789 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:59:40.435793 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-02-04 00:59:40.435806 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-04 00:59:40.435811 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-04 00:59:40.435815 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:59:40.435819 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-02-04 00:59:40.435823 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-04 00:59:40.435831 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-04 00:59:40.435835 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:59:40.435839 | orchestrator |
2026-02-04 00:59:40.435843 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] ***
2026-02-04 00:59:40.435847 | orchestrator | Wednesday 04 February 2026 00:53:03 +0000 (0:00:01.596) 0:01:18.722 ****
2026-02-04 00:59:40.435851 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-02-04 00:59:40.435858 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-04 00:59:40.435865 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-04 00:59:40.435869 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:59:40.435873 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-02-04 00:59:40.435877 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-04 00:59:40.435881 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-04 00:59:40.435885 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:59:40.435893 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-02-04 00:59:40.435900 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-04 00:59:40.435904 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-04 00:59:40.435912 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:59:40.435916 | orchestrator |
2026-02-04 00:59:40.435920 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] *****
2026-02-04 00:59:40.435924 | orchestrator | Wednesday 04 February 2026 00:53:04 +0000 (0:00:00.923) 0:01:19.646 ****
2026-02-04 00:59:40.435928 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-02-04 00:59:40.435932 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-04 00:59:40.435936 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-04 00:59:40.435940 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:59:40.436103 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-02-04 00:59:40.436114 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-04 00:59:40.436118 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-04 00:59:40.436122 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:59:40.436129 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-02-04 00:59:40.436133 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-04 00:59:40.436137 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-04 00:59:40.436141 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:59:40.436145 | orchestrator |
2026-02-04 00:59:40.436149 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] *******
2026-02-04 00:59:40.436153 | orchestrator | Wednesday 04 February 2026 00:53:05 +0000 (0:00:01.415) 0:01:21.061 ****
2026-02-04 00:59:40.436157 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-02-04 00:59:40.436168 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-04 00:59:40.436176 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-04 00:59:40.436182 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:59:40.436192 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-02-04 00:59:40.436203 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-04 00:59:40.436209 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-04 00:59:40.436218 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:59:40.436224 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-02-04 00:59:40.436233 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-04 00:59:40.436245 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-04 00:59:40.436251 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:59:40.436257 | orchestrator |
2026-02-04 00:59:40.436263 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] ***
2026-02-04 00:59:40.436269 | orchestrator | Wednesday 04 February 2026 00:53:07 +0000 (0:00:01.366) 0:01:22.427 ****
2026-02-04 00:59:40.436274 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-02-04 00:59:40.436284 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-04 00:59:40.436291 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-04 00:59:40.436297 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:59:40.436303 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-02-04 00:59:40.436309 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-04 00:59:40.436326 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-04 00:59:40.436333 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:59:40.436339 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-02-04 00:59:40.436345 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-04 00:59:40.436354 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-04 00:59:40.436361 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:59:40.436367 | orchestrator |
2026-02-04 00:59:40.436373 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] ****
2026-02-04 00:59:40.436381 | orchestrator | Wednesday 04 February 2026 00:53:07 +0000 (0:00:00.877) 0:01:23.305 ****
2026-02-04 00:59:40.436386 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-02-04 00:59:40.436393 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-04 00:59:40.436398 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-04 00:59:40.436402 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:59:40.436408 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-02-04 00:59:40.436413 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-04 00:59:40.436420 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-04 00:59:40.436424 |
orchestrator | skipping: [testbed-node-2] 2026-02-04 00:59:40.436429 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-04 00:59:40.436433 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-04 00:59:40.436444 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-04 00:59:40.436448 | orchestrator | skipping: [testbed-node-1] 
2026-02-04 00:59:40.436452 | orchestrator | 2026-02-04 00:59:40.436456 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2026-02-04 00:59:40.436459 | orchestrator | Wednesday 04 February 2026 00:53:09 +0000 (0:00:01.143) 0:01:24.449 **** 2026-02-04 00:59:40.436465 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-02-04 00:59:40.436472 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-02-04 00:59:40.436480 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-02-04 00:59:40.436490 | orchestrator | 2026-02-04 00:59:40.436499 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2026-02-04 00:59:40.436504 | orchestrator | Wednesday 04 February 2026 00:53:10 +0000 (0:00:01.889) 0:01:26.338 **** 2026-02-04 00:59:40.436510 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-02-04 00:59:40.436516 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-02-04 00:59:40.436522 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-02-04 00:59:40.436528 | orchestrator | 2026-02-04 00:59:40.436534 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2026-02-04 00:59:40.436540 | orchestrator | Wednesday 04 February 2026 00:53:12 +0000 (0:00:01.867) 0:01:28.206 **** 2026-02-04 00:59:40.436545 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-02-04 00:59:40.436551 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 
'sshd_config'})  2026-02-04 00:59:40.436556 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-04 00:59:40.436563 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:59:40.436568 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-02-04 00:59:40.436574 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-04 00:59:40.436580 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:59:40.436586 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-04 00:59:40.436592 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:59:40.436598 | orchestrator | 2026-02-04 00:59:40.436604 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2026-02-04 00:59:40.436614 | orchestrator | Wednesday 04 February 2026 00:53:14 +0000 (0:00:01.843) 0:01:30.050 **** 2026-02-04 00:59:40.436621 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-04 00:59:40.436633 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 
'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-04 00:59:40.436641 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-04 00:59:40.436653 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-04 00:59:40.436658 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': 
False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-04 00:59:40.436662 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-04 00:59:40.436666 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-04 00:59:40.436691 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-04 00:59:40.436735 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-04 00:59:40.436740 | orchestrator | 2026-02-04 00:59:40.436744 | orchestrator | TASK [include_role : aodh] ***************************************************** 2026-02-04 00:59:40.436749 | orchestrator | Wednesday 04 February 2026 00:53:17 +0000 (0:00:03.248) 0:01:33.298 **** 2026-02-04 00:59:40.436753 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 00:59:40.436758 | orchestrator | 2026-02-04 00:59:40.436762 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2026-02-04 00:59:40.436767 | orchestrator | Wednesday 04 February 2026 00:53:18 +0000 (0:00:00.764) 0:01:34.063 **** 2026-02-04 00:59:40.436773 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': 
'30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-04 00:59:40.436782 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-04 00:59:40.436787 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-04 00:59:40.436795 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-04 00:59:40.436803 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-04 00:59:40.436808 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-04 00:59:40.436812 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-04 00:59:40.437688 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-04 00:59:40.437788 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-04 00:59:40.437798 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': 
['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-04 00:59:40.437809 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-04 00:59:40.437813 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-04 00:59:40.437817 | orchestrator | 2026-02-04 00:59:40.437822 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2026-02-04 00:59:40.437827 | orchestrator | Wednesday 04 February 2026 00:53:24 +0000 (0:00:05.391) 0:01:39.454 **** 2026-02-04 00:59:40.437831 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 
'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-04 00:59:40.437843 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-04 00:59:40.437847 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-04 00:59:40.437853 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-04 00:59:40.437858 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:59:40.437864 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-04 00:59:40.437868 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-04 00:59:40.437872 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-04 00:59:40.437877 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-04 00:59:40.437881 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:59:40.437888 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-04 00:59:40.437895 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-04 00:59:40.437901 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-04 00:59:40.437905 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-04 00:59:40.437909 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:59:40.437950 | orchestrator | 2026-02-04 00:59:40.437990 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2026-02-04 00:59:40.437994 | orchestrator | Wednesday 04 February 2026 00:53:25 +0000 (0:00:01.634) 0:01:41.088 **** 2026-02-04 00:59:40.437999 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-02-04 00:59:40.438004 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-02-04 00:59:40.438010 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:59:40.438085 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-02-04 00:59:40.438090 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-02-04 00:59:40.438094 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:59:40.438098 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-02-04 00:59:40.438102 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'8042', 'listen_port': '8042'}})  2026-02-04 00:59:40.438106 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:59:40.438111 | orchestrator | 2026-02-04 00:59:40.438119 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2026-02-04 00:59:40.438123 | orchestrator | Wednesday 04 February 2026 00:53:26 +0000 (0:00:01.260) 0:01:42.349 **** 2026-02-04 00:59:40.438131 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:59:40.438135 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:59:40.438139 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:59:40.438143 | orchestrator | 2026-02-04 00:59:40.438147 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2026-02-04 00:59:40.438151 | orchestrator | Wednesday 04 February 2026 00:53:28 +0000 (0:00:01.681) 0:01:44.030 **** 2026-02-04 00:59:40.438155 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:59:40.438158 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:59:40.438162 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:59:40.438166 | orchestrator | 2026-02-04 00:59:40.438170 | orchestrator | TASK [include_role : barbican] ************************************************* 2026-02-04 00:59:40.438174 | orchestrator | Wednesday 04 February 2026 00:53:32 +0000 (0:00:03.761) 0:01:47.792 **** 2026-02-04 00:59:40.438178 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 00:59:40.438181 | orchestrator | 2026-02-04 00:59:40.438186 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2026-02-04 00:59:40.438192 | orchestrator | Wednesday 04 February 2026 00:53:33 +0000 (0:00:01.079) 0:01:48.871 **** 2026-02-04 00:59:40.438202 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-04 00:59:40.438211 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-04 00:59:40.438221 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-04 00:59:40.438228 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-04 00:59:40.438243 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-04 00:59:40.438250 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-04 00:59:40.438259 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-04 00:59:40.438264 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': 
'30'}}})  2026-02-04 00:59:40.438270 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-04 00:59:40.438276 | orchestrator | 2026-02-04 00:59:40.438282 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2026-02-04 00:59:40.438292 | orchestrator | Wednesday 04 February 2026 00:53:37 +0000 (0:00:03.909) 0:01:52.781 **** 2026-02-04 00:59:40.438304 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-04 00:59:40.438312 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-04 00:59:40.438318 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-04 00:59:40.438328 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:59:40.438336 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-04 00:59:40.438342 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-04 00:59:40.438353 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-04 00:59:40.438360 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:59:40.438371 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-04 00:59:40.438377 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-04 00:59:40.438384 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-04 00:59:40.438388 | orchestrator | skipping: [testbed-node-2] 2026-02-04 
00:59:40.438393 | orchestrator | 2026-02-04 00:59:40.438397 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2026-02-04 00:59:40.438402 | orchestrator | Wednesday 04 February 2026 00:53:38 +0000 (0:00:00.658) 0:01:53.439 **** 2026-02-04 00:59:40.438407 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-02-04 00:59:40.438413 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-02-04 00:59:40.438417 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:59:40.438423 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-02-04 00:59:40.438438 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-02-04 00:59:40.438444 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:59:40.438449 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-02-04 00:59:40.438456 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-02-04 00:59:40.438462 | orchestrator | 
skipping: [testbed-node-2] 2026-02-04 00:59:40.438468 | orchestrator | 2026-02-04 00:59:40.438474 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2026-02-04 00:59:40.438480 | orchestrator | Wednesday 04 February 2026 00:53:39 +0000 (0:00:01.283) 0:01:54.722 **** 2026-02-04 00:59:40.438486 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:59:40.438492 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:59:40.438498 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:59:40.438504 | orchestrator | 2026-02-04 00:59:40.438510 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2026-02-04 00:59:40.438516 | orchestrator | Wednesday 04 February 2026 00:53:40 +0000 (0:00:01.395) 0:01:56.118 **** 2026-02-04 00:59:40.438522 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:59:40.438529 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:59:40.438535 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:59:40.438540 | orchestrator | 2026-02-04 00:59:40.438550 | orchestrator | TASK [include_role : blazar] *************************************************** 2026-02-04 00:59:40.438557 | orchestrator | Wednesday 04 February 2026 00:53:43 +0000 (0:00:02.403) 0:01:58.521 **** 2026-02-04 00:59:40.438564 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:59:40.438570 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:59:40.438588 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:59:40.438594 | orchestrator | 2026-02-04 00:59:40.438608 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2026-02-04 00:59:40.438614 | orchestrator | Wednesday 04 February 2026 00:53:43 +0000 (0:00:00.421) 0:01:58.943 **** 2026-02-04 00:59:40.438620 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 00:59:40.438626 | orchestrator | 2026-02-04 00:59:40.438632 | 
orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2026-02-04 00:59:40.438637 | orchestrator | Wednesday 04 February 2026 00:53:44 +0000 (0:00:01.235) 0:02:00.178 **** 2026-02-04 00:59:40.438644 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-02-04 00:59:40.438657 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-02-04 00:59:40.438669 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-02-04 00:59:40.438676 | orchestrator | 2026-02-04 00:59:40.438683 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2026-02-04 00:59:40.438689 | orchestrator | Wednesday 04 February 2026 00:53:48 +0000 (0:00:04.030) 0:02:04.209 **** 2026-02-04 00:59:40.438865 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-02-04 
00:59:40.438875 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:59:40.438883 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-02-04 00:59:40.438889 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:59:40.438900 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-02-04 00:59:40.438913 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:59:40.438919 | orchestrator | 2026-02-04 
00:59:40.438924 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2026-02-04 00:59:40.438930 | orchestrator | Wednesday 04 February 2026 00:53:52 +0000 (0:00:03.541) 0:02:07.750 **** 2026-02-04 00:59:40.438937 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-02-04 00:59:40.438946 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-02-04 00:59:40.438953 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:59:40.439146 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-02-04 00:59:40.439164 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 
192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-02-04 00:59:40.439179 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:59:40.439206 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-02-04 00:59:40.439222 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-02-04 00:59:40.439238 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:59:40.439252 | orchestrator | 2026-02-04 00:59:40.439264 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2026-02-04 00:59:40.439279 | orchestrator | Wednesday 04 February 2026 00:53:55 +0000 (0:00:03.390) 0:02:11.141 **** 2026-02-04 00:59:40.439292 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:59:40.439304 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:59:40.439318 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:59:40.439332 | orchestrator | 2026-02-04 00:59:40.439347 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2026-02-04 00:59:40.439362 | orchestrator | Wednesday 04 February 2026 00:53:57 +0000 (0:00:01.506) 0:02:12.648 **** 2026-02-04 00:59:40.439393 | 
orchestrator | skipping: [testbed-node-0] 2026-02-04 00:59:40.439407 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:59:40.439421 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:59:40.439434 | orchestrator | 2026-02-04 00:59:40.439450 | orchestrator | TASK [include_role : cinder] *************************************************** 2026-02-04 00:59:40.439463 | orchestrator | Wednesday 04 February 2026 00:53:59 +0000 (0:00:02.399) 0:02:15.047 **** 2026-02-04 00:59:40.439469 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 00:59:40.439475 | orchestrator | 2026-02-04 00:59:40.439481 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2026-02-04 00:59:40.439487 | orchestrator | Wednesday 04 February 2026 00:54:00 +0000 (0:00:01.267) 0:02:16.315 **** 2026-02-04 00:59:40.439502 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-04 00:59:40.439511 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-04 00:59:40.439531 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-04 00:59:40.439569 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-04 00:59:40.439589 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': 
{'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-04 00:59:40.439627 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-04 00:59:40.439645 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-04 00:59:40.439664 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-04 00:59:40.439762 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-04 00:59:40.439844 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 
'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-04 00:59:40.439859 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-04 00:59:40.439871 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-04 00:59:40.439877 | orchestrator | 2026-02-04 
00:59:40.439885 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2026-02-04 00:59:40.439891 | orchestrator | Wednesday 04 February 2026 00:54:08 +0000 (0:00:07.311) 0:02:23.627 **** 2026-02-04 00:59:40.439897 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-04 00:59:40.439903 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-04 00:59:40.439916 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-04 00:59:40.439926 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-04 00:59:40.439932 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:59:40.439941 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-04 00:59:40.439947 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-04 00:59:40.439954 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-04 00:59:40.439960 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-04 00:59:40.439967 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:59:40.439977 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-04 00:59:40.439989 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-04 
00:59:40.439996 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-04 00:59:40.440000 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-04 00:59:40.440004 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:59:40.440008 | orchestrator | 2026-02-04 00:59:40.440012 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2026-02-04 00:59:40.440016 | orchestrator | Wednesday 04 February 2026 00:54:09 +0000 (0:00:01.723) 0:02:25.350 **** 2026-02-04 00:59:40.440020 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-02-04 00:59:40.440025 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-02-04 00:59:40.440030 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-02-04 00:59:40.440034 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:59:40.440038 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-02-04 00:59:40.440048 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:59:40.440052 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-02-04 00:59:40.440059 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-02-04 00:59:40.440063 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:59:40.440066 | orchestrator | 2026-02-04 00:59:40.440070 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2026-02-04 00:59:40.440074 | orchestrator | Wednesday 04 February 2026 00:54:11 +0000 (0:00:01.691) 0:02:27.042 **** 2026-02-04 00:59:40.440078 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:59:40.440082 | orchestrator | changed: [testbed-node-1] 2026-02-04 
00:59:40.440086 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:59:40.440090 | orchestrator | 2026-02-04 00:59:40.440094 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2026-02-04 00:59:40.440097 | orchestrator | Wednesday 04 February 2026 00:54:13 +0000 (0:00:01.325) 0:02:28.368 **** 2026-02-04 00:59:40.440101 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:59:40.440105 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:59:40.440109 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:59:40.440113 | orchestrator | 2026-02-04 00:59:40.440116 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2026-02-04 00:59:40.440120 | orchestrator | Wednesday 04 February 2026 00:54:15 +0000 (0:00:02.149) 0:02:30.517 **** 2026-02-04 00:59:40.440124 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:59:40.440128 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:59:40.440132 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:59:40.440135 | orchestrator | 2026-02-04 00:59:40.440139 | orchestrator | TASK [include_role : cyborg] *************************************************** 2026-02-04 00:59:40.440143 | orchestrator | Wednesday 04 February 2026 00:54:15 +0000 (0:00:00.577) 0:02:31.095 **** 2026-02-04 00:59:40.440147 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:59:40.440151 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:59:40.440154 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:59:40.440158 | orchestrator | 2026-02-04 00:59:40.440162 | orchestrator | TASK [include_role : designate] ************************************************ 2026-02-04 00:59:40.440166 | orchestrator | Wednesday 04 February 2026 00:54:16 +0000 (0:00:00.336) 0:02:31.432 **** 2026-02-04 00:59:40.440172 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 00:59:40.440176 | orchestrator | 
2026-02-04 00:59:40.440180 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2026-02-04 00:59:40.440184 | orchestrator | Wednesday 04 February 2026 00:54:17 +0000 (0:00:01.162) 0:02:32.594 **** 2026-02-04 00:59:40.440189 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-04 00:59:40.440197 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-04 00:59:40.440201 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 
'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-04 00:59:40.440208 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-04 00:59:40.440213 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-04 00:59:40.440217 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-04 00:59:40.440223 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-04 00:59:40.440228 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-04 00:59:40.440235 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': 
{'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-04 00:59:40.440869 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-04 00:59:40.440893 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-04 00:59:40.440898 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 
'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-04 00:59:40.440906 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-04 00:59:40.440910 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-04 00:59:40.440922 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-04 00:59:40.440926 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-04 00:59:40.440936 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-04 00:59:40.440940 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-04 00:59:40.440946 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-04 00:59:40.440950 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-04 00:59:40.440958 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 
'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-04 00:59:40.440962 | orchestrator | 2026-02-04 00:59:40.440966 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2026-02-04 00:59:40.440970 | orchestrator | Wednesday 04 February 2026 00:54:26 +0000 (0:00:09.700) 0:02:42.295 **** 2026-02-04 00:59:40.440975 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-04 00:59:40.440982 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-04 00:59:40.440986 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-04 00:59:40.440992 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-04 00:59:40.440999 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-04 00:59:40.441003 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-04 00:59:40.441007 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-04 00:59:40.441074 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 
'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-04 00:59:40.441080 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-04 00:59:40.441084 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-04 00:59:40.441088 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': 
['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-04 00:59:40.441095 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:59:40.441112 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-04 00:59:40.441116 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-04 00:59:40.441120 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': 
['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-04 00:59:40.441124 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:59:40.441131 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-04 00:59:40.441136 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 
 2026-02-04 00:59:40.441141 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-04 00:59:40.441148 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-04 00:59:40.441152 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-04 00:59:40.441156 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': 
{'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-04 00:59:40.441163 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-04 00:59:40.441167 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:59:40.441171 | orchestrator | 2026-02-04 00:59:40.441175 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2026-02-04 00:59:40.441179 | orchestrator | Wednesday 04 February 2026 00:54:29 +0000 (0:00:02.207) 0:02:44.502 **** 2026-02-04 00:59:40.441184 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-02-04 00:59:40.441188 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-02-04 00:59:40.441192 | orchestrator | 
skipping: [testbed-node-0]
2026-02-04 00:59:40.441196 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2026-02-04 00:59:40.441200 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2026-02-04 00:59:40.441207 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2026-02-04 00:59:40.441211 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:59:40.441217 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2026-02-04 00:59:40.441221 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:59:40.441225 | orchestrator |
2026-02-04 00:59:40.441229 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] **********
2026-02-04 00:59:40.441233 | orchestrator | Wednesday 04 February 2026 00:54:30 +0000 (0:00:01.819) 0:02:46.321 ****
2026-02-04 00:59:40.441237 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:59:40.441240 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:59:40.441244 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:59:40.441248 | orchestrator |
2026-02-04 00:59:40.441252 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] **********
2026-02-04 00:59:40.441256 | orchestrator | Wednesday 04 February 2026 00:54:33 +0000 (0:00:02.575) 0:02:48.897 ****
2026-02-04 00:59:40.441260 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:59:40.441263 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:59:40.441267 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:59:40.441271 | orchestrator |
2026-02-04 00:59:40.441275 | orchestrator | TASK [include_role : etcd] *****************************************************
2026-02-04 00:59:40.441279 | orchestrator | Wednesday 04 February 2026 00:54:35 +0000 (0:00:02.122) 0:02:51.019 ****
2026-02-04 00:59:40.441283 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:59:40.441286 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:59:40.441290 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:59:40.441294 | orchestrator |
2026-02-04 00:59:40.441298 | orchestrator | TASK [include_role : glance] ***************************************************
2026-02-04 00:59:40.441302 | orchestrator | Wednesday 04 February 2026 00:54:36 +0000 (0:00:00.706) 0:02:51.726 ****
2026-02-04 00:59:40.441306 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 00:59:40.441310 | orchestrator |
2026-02-04 00:59:40.441314 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] *********************
2026-02-04 00:59:40.441317 | orchestrator | Wednesday 04 February 2026 00:54:37 +0000 (0:00:01.108) 0:02:52.835 ****
2026-02-04 00:59:40.441325 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5',
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-04 00:59:40.441335 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file 
ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-04 00:59:40.441340 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 
check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-04 00:59:40.441352 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-04 00:59:40.441360 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout 
client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-04 00:59:40.441367 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 
192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-04 00:59:40.441374 | orchestrator | 2026-02-04 00:59:40.441378 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2026-02-04 00:59:40.441381 | orchestrator | Wednesday 04 February 2026 00:54:44 +0000 (0:00:06.535) 0:02:59.371 **** 2026-02-04 00:59:40.441387 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout 
server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-04 00:59:40.441395 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify 
required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-04 00:59:40.441403 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:59:40.441409 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-04 
00:59:40.441416 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-04 00:59:40.441424 | orchestrator | skipping: [testbed-node-2] 2026-02-04 
00:59:40.441429 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-04 00:59:40.441436 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': 
['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-04 00:59:40.441443 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:59:40.441447 | orchestrator | 2026-02-04 00:59:40.441451 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2026-02-04 00:59:40.441455 | orchestrator | Wednesday 04 February 2026 00:54:48 +0000 (0:00:04.793) 0:03:04.165 **** 2026-02-04 
00:59:40.441460 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-04 00:59:40.441465 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-04 00:59:40.441469 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:59:40.441477 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-04 00:59:40.441482 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout 
server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-04 00:59:40.441486 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:59:40.441491 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-04 00:59:40.441495 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-04 00:59:40.441503 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:59:40.441508 | orchestrator | 2026-02-04 00:59:40.441514 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2026-02-04 00:59:40.441521 | orchestrator | Wednesday 04 February 2026 00:54:52 +0000 (0:00:04.038) 0:03:08.204 **** 2026-02-04 00:59:40.441527 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:59:40.441536 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:59:40.441544 | orchestrator | changed: 
[testbed-node-2] 2026-02-04 00:59:40.441552 | orchestrator | 2026-02-04 00:59:40.441558 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2026-02-04 00:59:40.441566 | orchestrator | Wednesday 04 February 2026 00:54:54 +0000 (0:00:01.570) 0:03:09.774 **** 2026-02-04 00:59:40.441573 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:59:40.441580 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:59:40.441586 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:59:40.441592 | orchestrator | 2026-02-04 00:59:40.441599 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2026-02-04 00:59:40.441610 | orchestrator | Wednesday 04 February 2026 00:54:56 +0000 (0:00:02.592) 0:03:12.366 **** 2026-02-04 00:59:40.441617 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:59:40.441623 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:59:40.441630 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:59:40.441636 | orchestrator | 2026-02-04 00:59:40.441642 | orchestrator | TASK [include_role : grafana] ************************************************** 2026-02-04 00:59:40.441648 | orchestrator | Wednesday 04 February 2026 00:54:57 +0000 (0:00:00.715) 0:03:13.082 **** 2026-02-04 00:59:40.441710 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 00:59:40.441719 | orchestrator | 2026-02-04 00:59:40.441725 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2026-02-04 00:59:40.441732 | orchestrator | Wednesday 04 February 2026 00:54:58 +0000 (0:00:00.959) 0:03:14.042 **** 2026-02-04 00:59:40.441738 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-04 00:59:40.441750 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-04 00:59:40.441766 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-04 00:59:40.441778 | orchestrator | 2026-02-04 00:59:40.441783 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 
2026-02-04 00:59:40.441788 | orchestrator | Wednesday 04 February 2026 00:55:03 +0000 (0:00:04.655) 0:03:18.697 **** 2026-02-04 00:59:40.441792 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-04 00:59:40.441801 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-04 00:59:40.441806 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:59:40.441811 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:59:40.441816 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-04 00:59:40.441841 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:59:40.441846 | orchestrator | 2026-02-04 00:59:40.441850 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2026-02-04 00:59:40.441853 | orchestrator | Wednesday 04 February 2026 00:55:04 +0000 (0:00:00.847) 0:03:19.545 **** 2026-02-04 00:59:40.441858 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-02-04 00:59:40.441862 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-02-04 00:59:40.441866 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:59:40.441870 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-02-04 00:59:40.441876 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-02-04 00:59:40.441880 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:59:40.441884 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 
'listen_port': '3000'}})  2026-02-04 00:59:40.441891 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-02-04 00:59:40.441894 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:59:40.441898 | orchestrator | 2026-02-04 00:59:40.441902 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2026-02-04 00:59:40.441906 | orchestrator | Wednesday 04 February 2026 00:55:04 +0000 (0:00:00.804) 0:03:20.349 **** 2026-02-04 00:59:40.441910 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:59:40.441913 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:59:40.441917 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:59:40.441921 | orchestrator | 2026-02-04 00:59:40.441925 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2026-02-04 00:59:40.441929 | orchestrator | Wednesday 04 February 2026 00:55:06 +0000 (0:00:01.533) 0:03:21.883 **** 2026-02-04 00:59:40.441933 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:59:40.441936 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:59:40.441940 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:59:40.441944 | orchestrator | 2026-02-04 00:59:40.441948 | orchestrator | TASK [include_role : heat] ***************************************************** 2026-02-04 00:59:40.441952 | orchestrator | Wednesday 04 February 2026 00:55:09 +0000 (0:00:02.609) 0:03:24.492 **** 2026-02-04 00:59:40.441955 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:59:40.441959 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:59:40.441963 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:59:40.441967 | orchestrator | 2026-02-04 00:59:40.441971 | orchestrator | TASK [include_role : horizon] 
************************************************** 2026-02-04 00:59:40.441974 | orchestrator | Wednesday 04 February 2026 00:55:09 +0000 (0:00:00.683) 0:03:25.175 **** 2026-02-04 00:59:40.441978 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 00:59:40.441982 | orchestrator | 2026-02-04 00:59:40.441986 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2026-02-04 00:59:40.441990 | orchestrator | Wednesday 04 February 2026 00:55:10 +0000 (0:00:01.120) 0:03:26.296 **** 2026-02-04 00:59:40.441998 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': 
['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-04 00:59:40.442010 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 
'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-04 00:59:40.442055 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-04 00:59:40.442064 | orchestrator | 2026-02-04 00:59:40.442068 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2026-02-04 00:59:40.442072 | orchestrator | Wednesday 04 February 2026 00:55:16 +0000 (0:00:05.404) 0:03:31.700 **** 2026-02-04 00:59:40.442319 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 
'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-04 00:59:40.442390 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 
'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-04 00:59:40.442410 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:59:40.442416 | orchestrator | skipping: [testbed-node-0] 2026-02-04 
00:59:40.442432 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-04 00:59:40.442437 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:59:40.442441 | orchestrator | 2026-02-04 00:59:40.442446 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2026-02-04 00:59:40.442451 | orchestrator | Wednesday 04 February 2026 00:55:18 +0000 (0:00:01.823) 0:03:33.524 **** 2026-02-04 00:59:40.442456 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-02-04 00:59:40.442469 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-04 00:59:40.442475 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-02-04 00:59:40.442483 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { 
path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-04 00:59:40.442488 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-02-04 00:59:40.442493 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:59:40.442497 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-02-04 00:59:40.442501 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-04 00:59:40.442505 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-02-04 00:59:40.442509 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-04 00:59:40.442513 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-02-04 00:59:40.442517 
| orchestrator | skipping: [testbed-node-1] 2026-02-04 00:59:40.442521 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-02-04 00:59:40.442529 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-04 00:59:40.442534 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-02-04 00:59:40.442541 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-04 00:59:40.442546 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-02-04 00:59:40.442550 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:59:40.442554 | orchestrator | 2026-02-04 00:59:40.442558 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2026-02-04 00:59:40.442562 | orchestrator | 
Wednesday 04 February 2026 00:55:19 +0000 (0:00:01.159) 0:03:34.683 **** 2026-02-04 00:59:40.442566 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:59:40.442570 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:59:40.442575 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:59:40.442579 | orchestrator | 2026-02-04 00:59:40.442583 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2026-02-04 00:59:40.442587 | orchestrator | Wednesday 04 February 2026 00:55:20 +0000 (0:00:01.412) 0:03:36.095 **** 2026-02-04 00:59:40.442591 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:59:40.442595 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:59:40.442602 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:59:40.442606 | orchestrator | 2026-02-04 00:59:40.442610 | orchestrator | TASK [include_role : influxdb] ************************************************* 2026-02-04 00:59:40.442614 | orchestrator | Wednesday 04 February 2026 00:55:23 +0000 (0:00:02.516) 0:03:38.612 **** 2026-02-04 00:59:40.442618 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:59:40.442622 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:59:40.442626 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:59:40.442630 | orchestrator | 2026-02-04 00:59:40.442634 | orchestrator | TASK [include_role : ironic] *************************************************** 2026-02-04 00:59:40.442638 | orchestrator | Wednesday 04 February 2026 00:55:23 +0000 (0:00:00.383) 0:03:38.996 **** 2026-02-04 00:59:40.442642 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:59:40.442646 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:59:40.442650 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:59:40.442654 | orchestrator | 2026-02-04 00:59:40.442659 | orchestrator | TASK [include_role : keystone] ************************************************* 2026-02-04 00:59:40.442663 | orchestrator | Wednesday 
04 February 2026 00:55:24 +0000 (0:00:00.701) 0:03:39.697 **** 2026-02-04 00:59:40.442667 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 00:59:40.442670 | orchestrator | 2026-02-04 00:59:40.442674 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2026-02-04 00:59:40.442679 | orchestrator | Wednesday 04 February 2026 00:55:25 +0000 (0:00:01.071) 0:03:40.769 **** 2026-02-04 00:59:40.442683 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-04 00:59:40.442726 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-04 00:59:40.442736 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-04 00:59:40.442746 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-04 00:59:40.442753 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 
'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-04 00:59:40.442760 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-04 00:59:40.442768 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-04 00:59:40.442785 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-04 00:59:40.442792 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-04 00:59:40.442798 | orchestrator | 2026-02-04 00:59:40.442805 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2026-02-04 00:59:40.442812 | orchestrator | Wednesday 04 February 2026 00:55:30 +0000 (0:00:04.775) 0:03:45.544 **** 2026-02-04 00:59:40.442819 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-04 00:59:40.442824 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-04 00:59:40.442828 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-04 00:59:40.442836 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:59:40.442843 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-04 00:59:40.442849 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-04 00:59:40.442855 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-04 00:59:40.442860 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:59:40.442864 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-04 00:59:40.442868 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-04 00:59:40.442876 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-04 00:59:40.442880 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:59:40.442884 | orchestrator | 2026-02-04 00:59:40.442888 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2026-02-04 00:59:40.442895 | orchestrator | Wednesday 04 February 2026 00:55:30 +0000 (0:00:00.683) 0:03:46.228 **** 2026-02-04 00:59:40.442902 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-02-04 00:59:40.442908 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-02-04 
00:59:40.442913 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:59:40.442918 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-02-04 00:59:40.442923 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-02-04 00:59:40.442928 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:59:40.442933 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-02-04 00:59:40.442941 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-02-04 00:59:40.442946 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:59:40.442951 | orchestrator | 2026-02-04 00:59:40.442955 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2026-02-04 00:59:40.442961 | orchestrator | Wednesday 04 February 2026 00:55:31 +0000 (0:00:01.043) 0:03:47.271 **** 2026-02-04 00:59:40.442965 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:59:40.442970 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:59:40.442974 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:59:40.442979 | orchestrator | 2026-02-04 00:59:40.442984 | orchestrator | TASK 
[proxysql-config : Copying over keystone ProxySQL rules config] *********** 2026-02-04 00:59:40.442988 | orchestrator | Wednesday 04 February 2026 00:55:33 +0000 (0:00:01.480) 0:03:48.751 **** 2026-02-04 00:59:40.442999 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:59:40.443004 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:59:40.443009 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:59:40.443014 | orchestrator | 2026-02-04 00:59:40.443018 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2026-02-04 00:59:40.443023 | orchestrator | Wednesday 04 February 2026 00:55:35 +0000 (0:00:02.546) 0:03:51.298 **** 2026-02-04 00:59:40.443028 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:59:40.443032 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:59:40.443037 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:59:40.443042 | orchestrator | 2026-02-04 00:59:40.443047 | orchestrator | TASK [include_role : magnum] *************************************************** 2026-02-04 00:59:40.443051 | orchestrator | Wednesday 04 February 2026 00:55:36 +0000 (0:00:00.663) 0:03:51.962 **** 2026-02-04 00:59:40.443058 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 00:59:40.443065 | orchestrator | 2026-02-04 00:59:40.443072 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2026-02-04 00:59:40.443078 | orchestrator | Wednesday 04 February 2026 00:55:37 +0000 (0:00:01.151) 0:03:53.114 **** 2026-02-04 00:59:40.443087 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-04 00:59:40.443099 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-04 00:59:40.443108 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-02-04 00:59:40.443116 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-04 00:59:40.443125 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-02-04 00:59:40.443131 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-04 00:59:40.443137 | orchestrator |
2026-02-04 00:59:40.443143 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] ***
2026-02-04 00:59:40.443149 | orchestrator | Wednesday 04 February 2026 00:55:41 +0000 (0:00:03.930) 0:03:57.045 ****
2026-02-04 00:59:40.443160 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-02-04 00:59:40.443167 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-04 00:59:40.443178 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:59:40.443188 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-02-04 00:59:40.443195 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-04 00:59:40.443202 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:59:40.443213 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-02-04 00:59:40.443220 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-04 00:59:40.443226 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:59:40.443230 | orchestrator |
2026-02-04 00:59:40.443234 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************
2026-02-04 00:59:40.443238 | orchestrator | Wednesday 04 February 2026 00:55:42 +0000 (0:00:01.101) 0:03:58.146 ****
2026-02-04 00:59:40.443242 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})
2026-02-04 00:59:40.443252 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})
2026-02-04 00:59:40.443257 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:59:40.443263 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})
2026-02-04 00:59:40.443267 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})
2026-02-04 00:59:40.443271 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:59:40.443275 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})
2026-02-04 00:59:40.443280 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})
2026-02-04 00:59:40.443284 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:59:40.443288 | orchestrator |
2026-02-04 00:59:40.443292 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] *************
2026-02-04 00:59:40.443296 | orchestrator | Wednesday 04 February 2026 00:55:43 +0000 (0:00:01.119) 0:03:59.266 ****
2026-02-04 00:59:40.443300 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:59:40.443304 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:59:40.443308 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:59:40.443312 | orchestrator |
2026-02-04 00:59:40.443317 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] *************
2026-02-04 00:59:40.443320 | orchestrator | Wednesday 04 February 2026 00:55:45 +0000 (0:00:01.646) 0:04:00.912 ****
2026-02-04 00:59:40.443324 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:59:40.443328 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:59:40.443332 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:59:40.443336 | orchestrator |
2026-02-04 00:59:40.443340 | orchestrator | TASK [include_role : manila] ***************************************************
2026-02-04 00:59:40.443344 | orchestrator | Wednesday 04 February 2026 00:55:47 +0000 (0:00:02.405) 0:04:03.318 ****
2026-02-04 00:59:40.443348 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 00:59:40.443352 | orchestrator |
2026-02-04 00:59:40.443357 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] *********************
2026-02-04 00:59:40.443361 | orchestrator | Wednesday 04 February 2026 00:55:49 +0000 (0:00:01.589) 0:04:04.908 ****
2026-02-04 00:59:40.443366 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-04 00:59:40.443375 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-04 00:59:40.443384 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-04 00:59:40.443389 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-04 00:59:40.443394 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-04 00:59:40.443398 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-04 00:59:40.443403 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-04 00:59:40.443410 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-04 00:59:40.443419 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-04 00:59:40.443442 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-04 00:59:40.443448 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-04 00:59:40.443452 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-04 00:59:40.443456 | orchestrator |
2026-02-04 00:59:40.443461 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] ***
2026-02-04 00:59:40.443465 | orchestrator | Wednesday 04 February 2026 00:55:53 +0000 (0:00:03.925) 0:04:08.833 ****
2026-02-04 00:59:40.443472 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-04 00:59:40.443482 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-04 00:59:40.443486 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-04 00:59:40.443492 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-04 00:59:40.443497 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:59:40.443502 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-04 00:59:40.443506 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-04 00:59:40.443510 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-04 00:59:40.443521 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-04 00:59:40.443526 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:59:40.443530 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-04 00:59:40.443537 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-04 00:59:40.443542 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-04 00:59:40.443546 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-04 00:59:40.443550 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:59:40.443554 | orchestrator |
2026-02-04 00:59:40.443558 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************
2026-02-04 00:59:40.443563 | orchestrator | Wednesday 04 February 2026 00:55:54 +0000 (0:00:00.760) 0:04:09.594 ****
2026-02-04 00:59:40.443572 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})
2026-02-04 00:59:40.443576 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})
2026-02-04 00:59:40.443580 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:59:40.443584 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})
2026-02-04 00:59:40.443593 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})
2026-02-04 00:59:40.443600 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:59:40.443606 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})
2026-02-04 00:59:40.443612 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})
2026-02-04 00:59:40.443619 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:59:40.443625 | orchestrator |
2026-02-04 00:59:40.443631 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] *************
2026-02-04 00:59:40.443637 | orchestrator | Wednesday 04 February 2026 00:55:55 +0000 (0:00:01.454) 0:04:11.049 ****
2026-02-04 00:59:40.443642 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:59:40.443648 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:59:40.443654 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:59:40.443660 | orchestrator |
2026-02-04 00:59:40.443666 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] *************
2026-02-04 00:59:40.443672 | orchestrator | Wednesday 04 February 2026 00:55:57 +0000 (0:00:01.465) 0:04:12.514 ****
2026-02-04 00:59:40.443677 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:59:40.443684 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:59:40.443689 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:59:40.443788 | orchestrator |
2026-02-04 00:59:40.443799 | orchestrator | TASK [include_role : mariadb] **************************************************
2026-02-04 00:59:40.443807 | orchestrator | Wednesday 04 February 2026 00:55:59 +0000 (0:00:02.450) 0:04:14.964 ****
2026-02-04 00:59:40.443813 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 00:59:40.443820 | orchestrator |
2026-02-04 00:59:40.443827 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] *******************************
2026-02-04 00:59:40.443831 | orchestrator | Wednesday 04 February 2026 00:56:01 +0000 (0:00:01.569) 0:04:16.534 ****
2026-02-04 00:59:40.443842 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-04 00:59:40.443846 | orchestrator |
2026-02-04 00:59:40.443850 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ********************
2026-02-04 00:59:40.443854 | orchestrator | Wednesday 04 February 2026 00:56:04 +0000 (0:00:03.375) 0:04:19.909 ****
2026-02-04 00:59:40.443860 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-04 00:59:40.443880 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-02-04 00:59:40.443885 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:59:40.443892 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-04 00:59:40.443897 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-02-04 00:59:40.443905 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:59:40.443913 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra':
['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-04 00:59:40.443919 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-04 00:59:40.443923 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:59:40.443927 | orchestrator | 2026-02-04 00:59:40.443931 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2026-02-04 00:59:40.443935 | orchestrator | Wednesday 04 February 2026 00:56:07 +0000 (0:00:02.511) 0:04:22.421 **** 2026-02-04 00:59:40.443942 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-04 00:59:40.443950 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-04 00:59:40.443954 | orchestrator | skipping: [testbed-node-0] 
2026-02-04 00:59:40.443962 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-04 00:59:40.443969 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 
'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-04 00:59:40.443974 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:59:40.443983 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': 
['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-04 00:59:40.444048 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-04 00:59:40.444057 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:59:40.444061 | orchestrator | 2026-02-04 00:59:40.444065 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2026-02-04 00:59:40.444069 | orchestrator | Wednesday 04 February 2026 00:56:09 +0000 (0:00:02.779) 0:04:25.200 **** 2026-02-04 00:59:40.444077 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 
2000 rise 2 fall 5 backup', '']}})  2026-02-04 00:59:40.444091 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-04 00:59:40.444102 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:59:40.444120 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-04 00:59:40.444127 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  
2026-02-04 00:59:40.444134 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:59:40.444140 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-04 00:59:40.444154 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-04 00:59:40.444162 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:59:40.444169 | orchestrator | 2026-02-04 00:59:40.444175 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2026-02-04 00:59:40.444182 | orchestrator | Wednesday 04 February 2026 00:56:13 +0000 (0:00:03.315) 0:04:28.515 **** 2026-02-04 00:59:40.444189 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:59:40.444195 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:59:40.444201 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:59:40.444207 | orchestrator | 2026-02-04 00:59:40.444213 | orchestrator | TASK [proxysql-config 
: Copying over mariadb ProxySQL rules config] ************ 2026-02-04 00:59:40.444219 | orchestrator | Wednesday 04 February 2026 00:56:15 +0000 (0:00:02.069) 0:04:30.584 **** 2026-02-04 00:59:40.444226 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:59:40.444232 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:59:40.444238 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:59:40.444244 | orchestrator | 2026-02-04 00:59:40.444250 | orchestrator | TASK [include_role : masakari] ************************************************* 2026-02-04 00:59:40.444256 | orchestrator | Wednesday 04 February 2026 00:56:16 +0000 (0:00:01.647) 0:04:32.232 **** 2026-02-04 00:59:40.444263 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:59:40.444269 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:59:40.444275 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:59:40.444281 | orchestrator | 2026-02-04 00:59:40.444287 | orchestrator | TASK [include_role : memcached] ************************************************ 2026-02-04 00:59:40.444300 | orchestrator | Wednesday 04 February 2026 00:56:17 +0000 (0:00:00.373) 0:04:32.605 **** 2026-02-04 00:59:40.444307 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 00:59:40.444313 | orchestrator | 2026-02-04 00:59:40.444319 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2026-02-04 00:59:40.444326 | orchestrator | Wednesday 04 February 2026 00:56:18 +0000 (0:00:01.565) 0:04:34.171 **** 2026-02-04 00:59:40.444350 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-04 00:59:40.444361 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-04 00:59:40.444369 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-04 00:59:40.444377 | orchestrator | 2026-02-04 00:59:40.444384 | 
orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2026-02-04 00:59:40.444391 | orchestrator | Wednesday 04 February 2026 00:56:20 +0000 (0:00:01.676) 0:04:35.847 **** 2026-02-04 00:59:40.444404 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-04 00:59:40.444412 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:59:40.444419 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-04 00:59:40.444432 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:59:40.444443 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-04 00:59:40.444451 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:59:40.444457 | orchestrator | 2026-02-04 00:59:40.444464 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2026-02-04 00:59:40.444471 | orchestrator | Wednesday 04 February 2026 00:56:20 +0000 (0:00:00.441) 0:04:36.289 **** 2026-02-04 00:59:40.444479 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-02-04 00:59:40.444488 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-02-04 00:59:40.444495 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:59:40.444502 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:59:40.444507 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 
'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-02-04 00:59:40.444511 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:59:40.444515 | orchestrator | 2026-02-04 00:59:40.444519 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2026-02-04 00:59:40.444523 | orchestrator | Wednesday 04 February 2026 00:56:21 +0000 (0:00:01.036) 0:04:37.326 **** 2026-02-04 00:59:40.444527 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:59:40.444531 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:59:40.444535 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:59:40.444539 | orchestrator | 2026-02-04 00:59:40.444544 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2026-02-04 00:59:40.444547 | orchestrator | Wednesday 04 February 2026 00:56:22 +0000 (0:00:00.524) 0:04:37.851 **** 2026-02-04 00:59:40.444551 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:59:40.444555 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:59:40.444559 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:59:40.444563 | orchestrator | 2026-02-04 00:59:40.444567 | orchestrator | TASK [include_role : mistral] ************************************************** 2026-02-04 00:59:40.444571 | orchestrator | Wednesday 04 February 2026 00:56:24 +0000 (0:00:01.562) 0:04:39.413 **** 2026-02-04 00:59:40.444583 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:59:40.444588 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:59:40.444592 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:59:40.444596 | orchestrator | 2026-02-04 00:59:40.444600 | orchestrator | TASK [include_role : neutron] ************************************************** 2026-02-04 00:59:40.444608 | orchestrator | Wednesday 04 February 2026 
00:56:24 +0000 (0:00:00.354) 0:04:39.768 **** 2026-02-04 00:59:40.444613 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 00:59:40.444617 | orchestrator | 2026-02-04 00:59:40.444621 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2026-02-04 00:59:40.444625 | orchestrator | Wednesday 04 February 2026 00:56:26 +0000 (0:00:01.667) 0:04:41.435 **** 2026-02-04 00:59:40.444639 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-04 00:59:40.444648 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-04 00:59:40.444653 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-02-04 00:59:40.444658 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-02-04 00:59:40.444666 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': 
{'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-02-04 00:59:40.444676 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-04 00:59:40.444684 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-04 00:59:40.444714 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-04 00:59:40.444724 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-02-04 00:59:40.444731 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-04 00:59:40.444739 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 
'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-04 00:59:40.444757 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-02-04 00:59:40.444765 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-04 00:59:40.444773 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-04 00:59:40.444784 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-04 00:59:40.444791 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-04 00:59:40.444799 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-04 00:59:40.444817 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-04 00:59:40.444826 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-02-04 00:59:40.444835 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-02-04 00:59:40.444840 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': 
'30'}}})  2026-02-04 00:59:40.444844 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-04 00:59:40.444856 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-04 00:59:40.444860 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-04 00:59:40.444865 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-04 00:59:40.444873 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-04 00:59:40.444877 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-02-04 00:59:40.444881 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-02-04 00:59:40.444889 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-02-04 00:59:40.444896 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-04 00:59:40.444901 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-02-04 00:59:40.444907 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-04 00:59:40.444912 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-04 00:59:40.444919 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-02-04 00:59:40.444924 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-04 00:59:40.444932 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 
'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-04 00:59:40.444936 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-04 00:59:40.444940 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-04 00:59:40.444947 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-04 00:59:40.444951 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-02-04 00:59:40.444960 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-04 00:59:40.444968 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-04 00:59:40.444973 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-04 00:59:40.444977 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-02-04 00:59:40.444985 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 
'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-04 00:59:40.444989 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-04 00:59:40.444999 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-04 00:59:40.445007 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-04 00:59:40.445011 | orchestrator | 2026-02-04 00:59:40.445016 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2026-02-04 00:59:40.445020 | orchestrator | Wednesday 04 February 2026 00:56:30 +0000 (0:00:04.901) 0:04:46.337 **** 2026-02-04 00:59:40.445025 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  
2026-02-04 00:59:40.445032 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-04 00:59:40.445036 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-02-04 00:59:40.445044 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-02-04 00:59:40.445051 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-02-04 00:59:40.445055 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-04 00:59:40.445062 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-04 00:59:40.445066 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-04 00:59:40.445074 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-04 00:59:40.445079 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-04 00:59:40.445083 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-02-04 00:59:40.445091 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-02-04 00:59:40.445096 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-04 00:59:40.445102 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-02-04 00:59:40.445110 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-04 00:59:40.445114 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-02-04 00:59:40.445121 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-04 00:59:40.445126 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-02-04 00:59:40.445130 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-04 00:59:40.445136 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-04 00:59:40.445144 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-04 00:59:40.445148 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-02-04 00:59:40.445152 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-04 00:59:40.445160 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-04 00:59:40.445165 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-02-04 00:59:40.445171 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-04 00:59:40.445179 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-04 00:59:40.445183 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-02-04 00:59:40.445190 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-02-04 00:59:40.445194 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-04 00:59:40.445198 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:59:40.445203 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-04 00:59:40.445213 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 
'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-04 00:59:40.445217 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-04 00:59:40.445222 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-04 00:59:40.445226 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 
'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-04 00:59:40.445234 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-02-04 00:59:40.445238 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-02-04 00:59:40.445243 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-04 00:59:40.445254 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-04 00:59:40.445259 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-04 00:59:40.445263 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-04 00:59:40.445273 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-04 00:59:40.445280 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 
6640'], 'timeout': '30'}}})  2026-02-04 00:59:40.445292 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:59:40.445302 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-02-04 00:59:40.445318 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-04 00:59:40.445323 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-04 00:59:40.445330 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-04 00:59:40.445339 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-04 00:59:40.445346 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:59:40.445353 | orchestrator | 2026-02-04 00:59:40.445359 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2026-02-04 00:59:40.445365 | orchestrator | Wednesday 04 February 2026 00:56:32 +0000 (0:00:01.670) 0:04:48.007 **** 2026-02-04 00:59:40.445372 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-02-04 00:59:40.445382 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-02-04 00:59:40.445389 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:59:40.445395 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-02-04 00:59:40.445401 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-02-04 00:59:40.445407 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:59:40.445412 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-02-04 00:59:40.445419 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-02-04 00:59:40.445425 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:59:40.445431 | orchestrator | 2026-02-04 00:59:40.445437 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2026-02-04 00:59:40.445443 | orchestrator | Wednesday 04 February 2026 00:56:35 +0000 (0:00:02.425) 0:04:50.433 **** 2026-02-04 00:59:40.445449 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:59:40.445456 | orchestrator | changed: [testbed-node-2] 
2026-02-04 00:59:40.445462 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:59:40.445469 | orchestrator | 2026-02-04 00:59:40.445475 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2026-02-04 00:59:40.445481 | orchestrator | Wednesday 04 February 2026 00:56:36 +0000 (0:00:01.362) 0:04:51.795 **** 2026-02-04 00:59:40.445487 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:59:40.445494 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:59:40.445500 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:59:40.445507 | orchestrator | 2026-02-04 00:59:40.445512 | orchestrator | TASK [include_role : placement] ************************************************ 2026-02-04 00:59:40.445516 | orchestrator | Wednesday 04 February 2026 00:56:38 +0000 (0:00:02.293) 0:04:54.089 **** 2026-02-04 00:59:40.445520 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 00:59:40.445524 | orchestrator | 2026-02-04 00:59:40.445528 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2026-02-04 00:59:40.445532 | orchestrator | Wednesday 04 February 2026 00:56:40 +0000 (0:00:01.426) 0:04:55.515 **** 2026-02-04 00:59:40.445536 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 
'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-04 00:59:40.445546 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-04 00:59:40.445626 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-04 00:59:40.445643 | orchestrator | 2026-02-04 00:59:40.445647 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2026-02-04 00:59:40.445651 | orchestrator | Wednesday 04 February 2026 00:56:44 +0000 (0:00:04.458) 0:04:59.974 **** 2026-02-04 00:59:40.445658 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-04 00:59:40.445662 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:59:40.445667 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-04 00:59:40.445671 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:59:40.445680 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-04 00:59:40.445690 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:59:40.445713 | orchestrator | 2026-02-04 00:59:40.445720 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2026-02-04 00:59:40.445726 | orchestrator | Wednesday 04 February 2026 00:56:45 +0000 (0:00:00.693) 0:05:00.667 **** 2026-02-04 00:59:40.445732 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-02-04 00:59:40.445736 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-02-04 00:59:40.445741 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:59:40.445745 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-02-04 00:59:40.445748 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-02-04 00:59:40.445753 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:59:40.445756 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-02-04 00:59:40.445764 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-02-04 00:59:40.445768 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:59:40.445772 | orchestrator | 2026-02-04 00:59:40.445775 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2026-02-04 00:59:40.445779 | orchestrator | Wednesday 04 February 2026 00:56:46 +0000 (0:00:00.973) 0:05:01.640 **** 2026-02-04 00:59:40.445783 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:59:40.445787 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:59:40.445791 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:59:40.445795 | orchestrator | 2026-02-04 00:59:40.445799 | 
orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2026-02-04 00:59:40.445802 | orchestrator | Wednesday 04 February 2026 00:56:48 +0000 (0:00:02.167) 0:05:03.808 **** 2026-02-04 00:59:40.445806 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:59:40.445810 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:59:40.445814 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:59:40.445818 | orchestrator | 2026-02-04 00:59:40.445822 | orchestrator | TASK [include_role : nova] ***************************************************** 2026-02-04 00:59:40.445826 | orchestrator | Wednesday 04 February 2026 00:56:50 +0000 (0:00:01.957) 0:05:05.765 **** 2026-02-04 00:59:40.445830 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 00:59:40.445834 | orchestrator | 2026-02-04 00:59:40.445837 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2026-02-04 00:59:40.445845 | orchestrator | Wednesday 04 February 2026 00:56:52 +0000 (0:00:01.816) 0:05:07.582 **** 2026-02-04 00:59:40.445850 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-04 00:59:40.446010 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-04 00:59:40.446116 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-04 00:59:40.446130 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-04 00:59:40.446137 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-04 00:59:40.446158 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-04 00:59:40.446175 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-04 00:59:40.446179 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-04 00:59:40.446186 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-04 00:59:40.446191 | orchestrator | 2026-02-04 00:59:40.446196 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2026-02-04 00:59:40.446201 | orchestrator | Wednesday 04 February 2026 00:56:57 +0000 (0:00:05.039) 0:05:12.621 **** 2026-02-04 00:59:40.446206 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-04 00:59:40.446214 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-04 00:59:40.446241 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-04 00:59:40.446265 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:59:40.446272 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-04 00:59:40.446288 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-04 00:59:40.446297 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-04 00:59:40.446301 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:59:40.446305 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-04 00:59:40.446322 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-04 00:59:40.446327 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-04 00:59:40.446331 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:59:40.446335 | orchestrator | 2026-02-04 00:59:40.446339 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2026-02-04 00:59:40.446343 | orchestrator | Wednesday 04 February 2026 00:56:58 +0000 (0:00:01.586) 0:05:14.208 **** 2026-02-04 00:59:40.446348 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-02-04 00:59:40.446356 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-02-04 00:59:40.446361 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-02-04 00:59:40.446369 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-02-04 00:59:40.446373 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:59:40.446377 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-02-04 00:59:40.446382 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-02-04 00:59:40.446386 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-02-04 00:59:40.446390 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-02-04 00:59:40.446394 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:59:40.446398 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-02-04 00:59:40.446402 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-02-04 00:59:40.446406 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-02-04 00:59:40.446521 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-02-04 00:59:40.446533 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:59:40.446541 | orchestrator | 2026-02-04 00:59:40.446554 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2026-02-04 00:59:40.446562 | orchestrator | Wednesday 04 February 2026 00:56:59 +0000 (0:00:01.063) 0:05:15.272 **** 2026-02-04 00:59:40.446569 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:59:40.446575 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:59:40.446582 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:59:40.446588 | orchestrator | 2026-02-04 00:59:40.446595 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2026-02-04 00:59:40.446603 | orchestrator | Wednesday 04 February 2026 00:57:01 +0000 (0:00:01.636) 0:05:16.909 **** 2026-02-04 00:59:40.446610 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:59:40.446615 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:59:40.446620 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:59:40.446624 | orchestrator | 2026-02-04 00:59:40.446629 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2026-02-04 00:59:40.446634 | orchestrator | Wednesday 04 February 2026 00:57:03 +0000 (0:00:02.331) 0:05:19.240 **** 2026-02-04 00:59:40.446641 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 00:59:40.446647 | orchestrator | 2026-02-04 00:59:40.446654 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2026-02-04 00:59:40.446660 | orchestrator | Wednesday 04 February 2026 00:57:05 +0000 (0:00:01.830) 0:05:21.071 
**** 2026-02-04 00:59:40.446674 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2026-02-04 00:59:40.446685 | orchestrator | 2026-02-04 00:59:40.446716 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2026-02-04 00:59:40.446724 | orchestrator | Wednesday 04 February 2026 00:57:06 +0000 (0:00:00.969) 0:05:22.040 **** 2026-02-04 00:59:40.446736 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-02-04 00:59:40.446753 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-02-04 00:59:40.446761 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-02-04 00:59:40.446768 | orchestrator |
2026-02-04 00:59:40.446775 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] ***
2026-02-04 00:59:40.446782 | orchestrator | Wednesday 04 February 2026 00:57:11 +0000 (0:00:05.154) 0:05:27.195 ****
2026-02-04 00:59:40.446790 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-02-04 00:59:40.446796 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:59:40.446804 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-02-04 00:59:40.446811 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:59:40.446823 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-02-04 00:59:40.446836 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:59:40.446843 | orchestrator |
2026-02-04 00:59:40.446850 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] *****
2026-02-04 00:59:40.446857 | orchestrator | Wednesday 04 February 2026 00:57:12 +0000 (0:00:01.123) 0:05:28.318 ****
2026-02-04 00:59:40.446864 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-02-04 00:59:40.446872 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-02-04 00:59:40.446881 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:59:40.446888 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-02-04 00:59:40.446901 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-02-04 00:59:40.446906 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:59:40.446912 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-02-04 00:59:40.446919 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-02-04 00:59:40.446928 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:59:40.446940 | orchestrator |
2026-02-04 00:59:40.446947 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2026-02-04 00:59:40.446953 | orchestrator | Wednesday 04 February 2026 00:57:14 +0000 (0:00:01.869) 0:05:30.188 ****
2026-02-04 00:59:40.446960 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:59:40.446967 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:59:40.446974 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:59:40.446980 | orchestrator |
2026-02-04 00:59:40.446986 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2026-02-04 00:59:40.446992 | orchestrator | Wednesday 04 February 2026 00:57:17 +0000 (0:00:02.799) 0:05:32.987 ****
2026-02-04 00:59:40.446999 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:59:40.447006 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:59:40.447012 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:59:40.447018 | orchestrator |
2026-02-04 00:59:40.447038 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] *************
2026-02-04 00:59:40.447045 | orchestrator | Wednesday 04 February 2026 00:57:20 +0000 (0:00:03.336) 0:05:36.323 ****
2026-02-04 00:59:40.447053 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy)
2026-02-04 00:59:40.447061 | orchestrator |
2026-02-04 00:59:40.447067 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] ***
2026-02-04 00:59:40.447075 | orchestrator | Wednesday 04 February 2026 00:57:22 +0000 (0:00:01.738) 0:05:38.062 ****
2026-02-04 00:59:40.447082 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-02-04 00:59:40.447099 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:59:40.447113 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-02-04 00:59:40.447121 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:59:40.447126 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-02-04 00:59:40.447130 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:59:40.447134 | orchestrator |
2026-02-04 00:59:40.447138 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] ***
2026-02-04 00:59:40.447143 | orchestrator | Wednesday 04 February 2026 00:57:24 +0000 (0:00:01.498) 0:05:39.560 ****
2026-02-04 00:59:40.447151 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-02-04 00:59:40.447155 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:59:40.447159 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-02-04 00:59:40.447164 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:59:40.447168 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-02-04 00:59:40.447172 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:59:40.447176 | orchestrator |
2026-02-04 00:59:40.447180 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] ***
2026-02-04 00:59:40.447184 | orchestrator | Wednesday 04 February 2026 00:57:25 +0000 (0:00:02.176) 0:05:41.147 ****
2026-02-04 00:59:40.447188 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:59:40.447197 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:59:40.447201 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:59:40.447205 | orchestrator |
2026-02-04 00:59:40.447209 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2026-02-04 00:59:40.447213 | orchestrator | Wednesday 04 February 2026 00:57:27 +0000 (0:00:02.176) 0:05:43.323 ****
2026-02-04 00:59:40.447217 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:59:40.447221 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:59:40.447225 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:59:40.447229 | orchestrator |
2026-02-04 00:59:40.447233 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2026-02-04 00:59:40.447237 | orchestrator | Wednesday 04 February 2026 00:57:30 +0000 (0:00:02.476) 0:05:45.799 ****
2026-02-04 00:59:40.447242 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:59:40.447245 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:59:40.447250 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:59:40.447253 | orchestrator |
2026-02-04 00:59:40.447257 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] *****************
2026-02-04 00:59:40.447261 | orchestrator | Wednesday 04 February 2026 00:57:33 +0000 (0:00:03.468) 0:05:49.268 ****
2026-02-04 00:59:40.447265 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy)
2026-02-04 00:59:40.447270 | orchestrator |
2026-02-04 00:59:40.447274 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] ***
2026-02-04 00:59:40.447278 | orchestrator | Wednesday 04 February 2026 00:57:34 +0000 (0:00:00.938) 0:05:50.207 ****
2026-02-04 00:59:40.447287 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-02-04 00:59:40.447291 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:59:40.447295 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-02-04 00:59:40.447300 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:59:40.447307 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-02-04 00:59:40.447312 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:59:40.447316 | orchestrator |
2026-02-04 00:59:40.447320 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] ***
2026-02-04 00:59:40.447325 | orchestrator | Wednesday 04 February 2026 00:57:36 +0000 (0:00:01.606) 0:05:51.814 ****
2026-02-04 00:59:40.447329 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-02-04 00:59:40.447336 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:59:40.447341 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-02-04 00:59:40.447345 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:59:40.447349 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-02-04 00:59:40.447354 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:59:40.447358 | orchestrator |
2026-02-04 00:59:40.447362 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] ****
2026-02-04 00:59:40.447366 | orchestrator | Wednesday 04 February 2026 00:57:38 +0000 (0:00:01.560) 0:05:53.374 ****
2026-02-04 00:59:40.447373 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:59:40.447377 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:59:40.447381 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:59:40.447385 | orchestrator |
2026-02-04 00:59:40.447389 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2026-02-04 00:59:40.447393 | orchestrator | Wednesday 04 February 2026 00:57:39 +0000 (0:00:01.769) 0:05:55.144 ****
2026-02-04 00:59:40.447397 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:59:40.447594 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:59:40.447712 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:59:40.447729 | orchestrator |
2026-02-04 00:59:40.447736 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2026-02-04 00:59:40.447744 | orchestrator | Wednesday 04 February 2026 00:57:42 +0000 (0:00:02.693) 0:05:57.838 ****
2026-02-04 00:59:40.447750 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:59:40.447756 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:59:40.447762 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:59:40.447769 | orchestrator |
2026-02-04 00:59:40.447775 | orchestrator | TASK [include_role : octavia] **************************************************
2026-02-04 00:59:40.447782 | orchestrator | Wednesday 04 February 2026 00:57:46 +0000 (0:00:03.675) 0:06:01.513 ****
2026-02-04 00:59:40.447789 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 00:59:40.447796 | orchestrator |
2026-02-04 00:59:40.447802 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ********************
2026-02-04 00:59:40.447809 | orchestrator | Wednesday 04 February 2026 00:57:48 +0000 (0:00:02.011) 0:06:03.524 ****
2026-02-04 00:59:40.447825 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-04 00:59:40.447839 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-02-04 00:59:40.447845 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-02-04 00:59:40.447851 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-04 00:59:40.447856 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-04 00:59:40.447868 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-04 00:59:40.447872 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-02-04 00:59:40.447882 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-02-04 00:59:40.447887 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-04 00:59:40.447891 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-04 00:59:40.447895 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-04 00:59:40.447902 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-02-04 00:59:40.447907 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-02-04 00:59:40.447920 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-04 00:59:40.447925 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-04 00:59:40.447930 | orchestrator |
2026-02-04 00:59:40.447934 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] ***
2026-02-04 00:59:40.447938 | orchestrator | Wednesday 04 February 2026 00:57:52 +0000 (0:00:03.895) 0:06:07.420 ****
2026-02-04 00:59:40.447942 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-04 00:59:40.447946 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-02-04 00:59:40.447954 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-02-04 00:59:40.447958 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-04 00:59:40.447968 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-04 00:59:40.447973 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:59:40.447978 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-04 00:59:40.447982 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-02-04 00:59:40.447986 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-02-04 00:59:40.447994 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-04 00:59:40.447999 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-04 00:59:40.448011 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-04 00:59:40.448015 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:59:40.448020 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-02-04 00:59:40.448024 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-02-04 00:59:40.448029 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-04 00:59:40.448035 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-04 00:59:40.448039 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:59:40.448043 | orchestrator |
2026-02-04 00:59:40.448047 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] ***********************
2026-02-04 00:59:40.448055 | orchestrator | Wednesday 04 February 2026 00:57:52 +0000 (0:00:00.866) 0:06:08.287 ****
2026-02-04 00:59:40.448061 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-02-04 00:59:40.448068 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-02-04 00:59:40.448075 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:59:40.448081 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-02-04 00:59:40.448088 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-02-04 00:59:40.448099 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:59:40.448107 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-02-04 00:59:40.448153 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn':
'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-04 00:59:40.448166 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:59:40.448174 | orchestrator | 2026-02-04 00:59:40.448180 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2026-02-04 00:59:40.448187 | orchestrator | Wednesday 04 February 2026 00:57:54 +0000 (0:00:01.970) 0:06:10.258 **** 2026-02-04 00:59:40.448210 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:59:40.448218 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:59:40.448225 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:59:40.448231 | orchestrator | 2026-02-04 00:59:40.448238 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2026-02-04 00:59:40.448244 | orchestrator | Wednesday 04 February 2026 00:57:56 +0000 (0:00:01.637) 0:06:11.895 **** 2026-02-04 00:59:40.448251 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:59:40.448258 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:59:40.448264 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:59:40.448271 | orchestrator | 2026-02-04 00:59:40.448277 | orchestrator | TASK [include_role : opensearch] *********************************************** 2026-02-04 00:59:40.448284 | orchestrator | Wednesday 04 February 2026 00:57:59 +0000 (0:00:02.480) 0:06:14.376 **** 2026-02-04 00:59:40.448291 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 00:59:40.448297 | orchestrator | 2026-02-04 00:59:40.448303 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2026-02-04 00:59:40.448328 | orchestrator | Wednesday 04 February 2026 00:58:00 +0000 (0:00:01.562) 0:06:15.938 **** 2026-02-04 00:59:40.448335 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 
'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-04 00:59:40.448355 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-04 00:59:40.448363 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-04 00:59:40.448375 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-04 00:59:40.448384 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-04 00:59:40.448397 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-04 00:59:40.448408 | orchestrator | 2026-02-04 00:59:40.448413 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2026-02-04 00:59:40.448418 | orchestrator | Wednesday 04 February 
2026 00:58:07 +0000 (0:00:06.840) 0:06:22.779 **** 2026-02-04 00:59:40.448424 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-04 00:59:40.448431 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 
'password'}}}})  2026-02-04 00:59:40.448437 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:59:40.448442 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-04 00:59:40.448450 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 
'opensearch', 'auth_pass': 'password'}}}})  2026-02-04 00:59:40.448618 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:59:40.448627 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-04 00:59:40.448635 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 
'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-04 00:59:40.448640 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:59:40.448644 | orchestrator | 2026-02-04 00:59:40.448648 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2026-02-04 00:59:40.448652 | orchestrator | Wednesday 04 February 2026 00:58:08 +0000 (0:00:00.738) 0:06:23.517 **** 2026-02-04 00:59:40.448656 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-02-04 00:59:40.448661 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-02-04 00:59:40.448665 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-02-04 00:59:40.448675 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:59:40.448679 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-02-04 00:59:40.448682 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-02-04 00:59:40.448686 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-02-04 00:59:40.448690 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:59:40.448793 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-02-04 00:59:40.448799 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-02-04 00:59:40.448807 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-02-04 00:59:40.448811 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:59:40.448815 | orchestrator | 2026-02-04 00:59:40.448819 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2026-02-04 00:59:40.448823 | orchestrator | Wednesday 04 February 2026 00:58:09 +0000 (0:00:01.066) 0:06:24.584 **** 2026-02-04 00:59:40.448827 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:59:40.448832 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:59:40.448838 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:59:40.448844 | orchestrator | 2026-02-04 00:59:40.448853 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2026-02-04 00:59:40.448861 | orchestrator | Wednesday 04 February 2026 00:58:10 +0000 (0:00:01.024) 0:06:25.608 **** 2026-02-04 00:59:40.448867 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:59:40.448872 | orchestrator | skipping: [testbed-node-1] 2026-02-04 
00:59:40.448879 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:59:40.448885 | orchestrator | 2026-02-04 00:59:40.448891 | orchestrator | TASK [include_role : prometheus] *********************************************** 2026-02-04 00:59:40.448897 | orchestrator | Wednesday 04 February 2026 00:58:11 +0000 (0:00:01.687) 0:06:27.296 **** 2026-02-04 00:59:40.448903 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 00:59:40.448909 | orchestrator | 2026-02-04 00:59:40.448915 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2026-02-04 00:59:40.448921 | orchestrator | Wednesday 04 February 2026 00:58:13 +0000 (0:00:01.704) 0:06:29.000 **** 2026-02-04 00:59:40.448934 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-02-04 00:59:40.448948 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-04 00:59:40.448956 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 00:59:40.448963 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 00:59:40.448968 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-04 00:59:40.448977 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-02-04 00:59:40.448982 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-02-04 00:59:40.448988 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-04 00:59:40.448997 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-04 00:59:40.449001 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 00:59:40.449006 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 00:59:40.449010 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 00:59:40.449018 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 00:59:40.449022 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-04 00:59:40.449027 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-04 00:59:40.449042 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 
'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-02-04 00:59:40.449047 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-02-04 00:59:40.449052 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 00:59:40.449061 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 00:59:40.449070 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-04 00:59:40.449083 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 
'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-02-04 00:59:40.449094 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-02-04 00:59:40.449101 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 
'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-02-04 00:59:40.449112 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-02-04 00:59:40.449118 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 00:59:40.449122 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 00:59:40.449134 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 00:59:40.449138 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 00:59:40.449142 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-04 00:59:40.449147 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 
'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-04 00:59:40.449151 | orchestrator | 2026-02-04 00:59:40.449155 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2026-02-04 00:59:40.449159 | orchestrator | Wednesday 04 February 2026 00:58:19 +0000 (0:00:05.502) 0:06:34.502 **** 2026-02-04 00:59:40.449165 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-02-04 00:59:40.449169 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-04 00:59:40.449174 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 00:59:40.449184 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 00:59:40.449188 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-04 00:59:40.449229 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': 
['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-02-04 00:59:40.449236 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-02-04 00:59:40.449243 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 00:59:40.449248 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 00:59:40.449256 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-04 00:59:40.449261 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:59:40.449269 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-02-04 00:59:40.449343 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-04 00:59:40.449352 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 00:59:40.449359 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 00:59:40.449504 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 
'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-04 00:59:40.449514 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-02-04 00:59:40.449530 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 
'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-02-04 00:59:40.449535 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-02-04 00:59:40.449539 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 00:59:40.449543 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-04 00:59:40.449550 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 00:59:40.449560 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 00:59:40.449565 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-04 00:59:40.449569 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:59:40.449576 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 
'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 00:59:40.449581 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-04 00:59:40.449585 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  
2026-02-04 00:59:40.449591 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-02-04 00:59:40.449600 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 00:59:40.449604 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 00:59:40.449610 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-04 00:59:40.449614 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:59:40.449618 | orchestrator | 2026-02-04 00:59:40.449622 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2026-02-04 00:59:40.449626 | orchestrator | Wednesday 04 February 2026 00:58:20 +0000 (0:00:01.629) 0:06:36.132 **** 2026-02-04 00:59:40.449630 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-02-04 00:59:40.449635 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-02-04 00:59:40.449639 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-02-04 00:59:40.449644 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 
'active_passive': True}})  2026-02-04 00:59:40.449649 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:59:40.449653 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-02-04 00:59:40.449657 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-02-04 00:59:40.449661 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-02-04 00:59:40.449668 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-02-04 00:59:40.449687 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-02-04 00:59:40.449765 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-02-04 00:59:40.449773 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:59:40.449780 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-02-04 00:59:40.449787 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-02-04 00:59:40.449793 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:59:40.449799 | orchestrator | 2026-02-04 00:59:40.449805 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2026-02-04 00:59:40.449811 | orchestrator | Wednesday 04 February 2026 00:58:22 +0000 (0:00:01.250) 0:06:37.382 **** 2026-02-04 00:59:40.449818 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:59:40.449822 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:59:40.449827 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:59:40.449830 | orchestrator | 2026-02-04 00:59:40.449834 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2026-02-04 00:59:40.449838 | orchestrator | Wednesday 04 February 2026 00:58:22 +0000 (0:00:00.576) 0:06:37.958 **** 2026-02-04 00:59:40.449842 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:59:40.449846 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:59:40.449850 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:59:40.449853 | orchestrator | 2026-02-04 00:59:40.449858 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2026-02-04 00:59:40.449862 | orchestrator | Wednesday 04 February 2026 00:58:24 +0000 (0:00:01.866) 0:06:39.825 **** 2026-02-04 00:59:40.449866 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 
2026-02-04 00:59:40.449869 | orchestrator | 2026-02-04 00:59:40.449899 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2026-02-04 00:59:40.449904 | orchestrator | Wednesday 04 February 2026 00:58:26 +0000 (0:00:02.220) 0:06:42.046 **** 2026-02-04 00:59:40.449909 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-04 00:59:40.449914 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-04 00:59:40.449929 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-04 00:59:40.449933 | orchestrator | 2026-02-04 00:59:40.449937 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2026-02-04 00:59:40.449941 | orchestrator | Wednesday 04 February 2026 00:58:29 +0000 (0:00:03.130) 0:06:45.176 **** 2026-02-04 00:59:40.449972 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': 
'/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-04 00:59:40.449978 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-04 00:59:40.449986 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:59:40.449990 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:59:40.449994 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 
'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-04 00:59:40.449998 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:59:40.450002 | orchestrator | 2026-02-04 00:59:40.450006 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2026-02-04 00:59:40.450042 | orchestrator | Wednesday 04 February 2026 00:58:30 +0000 (0:00:00.567) 0:06:45.743 **** 2026-02-04 00:59:40.450050 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-02-04 00:59:40.450054 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:59:40.450058 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-02-04 00:59:40.450062 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:59:40.450066 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-02-04 00:59:40.450070 | orchestrator | skipping: [testbed-node-2] 
2026-02-04 00:59:40.450075 | orchestrator | 2026-02-04 00:59:40.450078 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2026-02-04 00:59:40.450082 | orchestrator | Wednesday 04 February 2026 00:58:31 +0000 (0:00:01.409) 0:06:47.152 **** 2026-02-04 00:59:40.450086 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:59:40.450090 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:59:40.450094 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:59:40.450098 | orchestrator | 2026-02-04 00:59:40.450102 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2026-02-04 00:59:40.450106 | orchestrator | Wednesday 04 February 2026 00:58:32 +0000 (0:00:00.575) 0:06:47.728 **** 2026-02-04 00:59:40.450110 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:59:40.450114 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:59:40.450118 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:59:40.450122 | orchestrator | 2026-02-04 00:59:40.450126 | orchestrator | TASK [include_role : skyline] ************************************************** 2026-02-04 00:59:40.450129 | orchestrator | Wednesday 04 February 2026 00:58:34 +0000 (0:00:01.765) 0:06:49.494 **** 2026-02-04 00:59:40.450133 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 00:59:40.450138 | orchestrator | 2026-02-04 00:59:40.450142 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2026-02-04 00:59:40.450149 | orchestrator | Wednesday 04 February 2026 00:58:36 +0000 (0:00:02.289) 0:06:51.784 **** 2026-02-04 00:59:40.450153 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': 
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-04 00:59:40.450161 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-04 00:59:40.450169 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-04 00:59:40.450173 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-04 00:59:40.450181 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-04 00:59:40.450188 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-04 00:59:40.450192 | orchestrator | 2026-02-04 00:59:40.450196 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2026-02-04 00:59:40.450200 | orchestrator | Wednesday 04 February 2026 00:58:44 +0000 (0:00:07.905) 0:06:59.690 **** 2026-02-04 00:59:40.450207 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 
'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-02-04 00:59:40.450211 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-02-04 00:59:40.450215 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:59:40.450222 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 
'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-02-04 00:59:40.450230 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-02-04 00:59:40.450234 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:59:40.450238 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 
'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-02-04 00:59:40.450311 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-02-04 00:59:40.450364 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:59:40.450373 | orchestrator | 2026-02-04 00:59:40.450378 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2026-02-04 00:59:40.450383 | orchestrator | Wednesday 04 
February 2026 00:58:45 +0000 (0:00:01.008) 0:07:00.698 **** 2026-02-04 00:59:40.450388 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-02-04 00:59:40.450394 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-02-04 00:59:40.450429 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-02-04 00:59:40.450442 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-02-04 00:59:40.450447 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:59:40.450451 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-02-04 00:59:40.450455 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-02-04 00:59:40.450459 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-02-04 00:59:40.450463 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-02-04 00:59:40.450467 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:59:40.450472 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-02-04 00:59:40.450475 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-02-04 00:59:40.450480 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-02-04 00:59:40.450484 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-02-04 00:59:40.450488 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:59:40.450492 | orchestrator | 2026-02-04 00:59:40.450496 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2026-02-04 00:59:40.450499 | orchestrator | Wednesday 04 February 2026 00:58:47 +0000 (0:00:02.365) 0:07:03.063 **** 2026-02-04 00:59:40.450504 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:59:40.450508 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:59:40.450513 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:59:40.450516 | orchestrator | 2026-02-04 00:59:40.450521 | orchestrator | TASK [proxysql-config : 
Copying over skyline ProxySQL rules config] ************ 2026-02-04 00:59:40.450533 | orchestrator | Wednesday 04 February 2026 00:58:49 +0000 (0:00:01.430) 0:07:04.494 **** 2026-02-04 00:59:40.450537 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:59:40.450542 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:59:40.450546 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:59:40.450549 | orchestrator | 2026-02-04 00:59:40.450553 | orchestrator | TASK [include_role : swift] **************************************************** 2026-02-04 00:59:40.450558 | orchestrator | Wednesday 04 February 2026 00:58:51 +0000 (0:00:02.672) 0:07:07.166 **** 2026-02-04 00:59:40.450562 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:59:40.450570 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:59:40.450574 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:59:40.450578 | orchestrator | 2026-02-04 00:59:40.450582 | orchestrator | TASK [include_role : tacker] *************************************************** 2026-02-04 00:59:40.450586 | orchestrator | Wednesday 04 February 2026 00:58:52 +0000 (0:00:00.396) 0:07:07.563 **** 2026-02-04 00:59:40.450590 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:59:40.450594 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:59:40.450599 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:59:40.450603 | orchestrator | 2026-02-04 00:59:40.450607 | orchestrator | TASK [include_role : trove] **************************************************** 2026-02-04 00:59:40.450612 | orchestrator | Wednesday 04 February 2026 00:58:52 +0000 (0:00:00.356) 0:07:07.920 **** 2026-02-04 00:59:40.450616 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:59:40.450620 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:59:40.450624 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:59:40.450628 | orchestrator | 2026-02-04 00:59:40.450632 | orchestrator | TASK [include_role : venus] 
**************************************************** 2026-02-04 00:59:40.450637 | orchestrator | Wednesday 04 February 2026 00:58:53 +0000 (0:00:00.877) 0:07:08.797 **** 2026-02-04 00:59:40.450642 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:59:40.450646 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:59:40.450650 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:59:40.450654 | orchestrator | 2026-02-04 00:59:40.450658 | orchestrator | TASK [include_role : watcher] ************************************************** 2026-02-04 00:59:40.450662 | orchestrator | Wednesday 04 February 2026 00:58:53 +0000 (0:00:00.421) 0:07:09.218 **** 2026-02-04 00:59:40.450666 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:59:40.450671 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:59:40.450675 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:59:40.450679 | orchestrator | 2026-02-04 00:59:40.450686 | orchestrator | TASK [include_role : zun] ****************************************************** 2026-02-04 00:59:40.450690 | orchestrator | Wednesday 04 February 2026 00:58:54 +0000 (0:00:00.402) 0:07:09.621 **** 2026-02-04 00:59:40.450712 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:59:40.450719 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:59:40.450725 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:59:40.450731 | orchestrator | 2026-02-04 00:59:40.450738 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2026-02-04 00:59:40.450744 | orchestrator | Wednesday 04 February 2026 00:58:55 +0000 (0:00:01.209) 0:07:10.830 **** 2026-02-04 00:59:40.450751 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:59:40.450759 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:59:40.450766 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:59:40.450772 | orchestrator | 2026-02-04 00:59:40.450778 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by 
status] ********************** 2026-02-04 00:59:40.450785 | orchestrator | Wednesday 04 February 2026 00:58:56 +0000 (0:00:00.826) 0:07:11.657 **** 2026-02-04 00:59:40.450792 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:59:40.450798 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:59:40.450806 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:59:40.450811 | orchestrator | 2026-02-04 00:59:40.450819 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2026-02-04 00:59:40.450826 | orchestrator | Wednesday 04 February 2026 00:58:56 +0000 (0:00:00.429) 0:07:12.086 **** 2026-02-04 00:59:40.450832 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:59:40.450837 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:59:40.450843 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:59:40.450851 | orchestrator | 2026-02-04 00:59:40.450859 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2026-02-04 00:59:40.450866 | orchestrator | Wednesday 04 February 2026 00:58:57 +0000 (0:00:00.997) 0:07:13.083 **** 2026-02-04 00:59:40.450873 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:59:40.450879 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:59:40.450895 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:59:40.450903 | orchestrator | 2026-02-04 00:59:40.450911 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2026-02-04 00:59:40.450918 | orchestrator | Wednesday 04 February 2026 00:58:59 +0000 (0:00:01.504) 0:07:14.588 **** 2026-02-04 00:59:40.450925 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:59:40.450932 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:59:40.450940 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:59:40.450948 | orchestrator | 2026-02-04 00:59:40.450956 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2026-02-04 00:59:40.450965 | orchestrator | 
Wednesday 04 February 2026 00:59:00 +0000 (0:00:00.989) 0:07:15.577 **** 2026-02-04 00:59:40.450971 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:59:40.450977 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:59:40.450983 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:59:40.450990 | orchestrator | 2026-02-04 00:59:40.450997 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2026-02-04 00:59:40.451004 | orchestrator | Wednesday 04 February 2026 00:59:05 +0000 (0:00:05.279) 0:07:20.856 **** 2026-02-04 00:59:40.451010 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:59:40.451017 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:59:40.451024 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:59:40.451031 | orchestrator | 2026-02-04 00:59:40.451038 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2026-02-04 00:59:40.451046 | orchestrator | Wednesday 04 February 2026 00:59:08 +0000 (0:00:02.938) 0:07:23.795 **** 2026-02-04 00:59:40.451054 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:59:40.451061 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:59:40.451068 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:59:40.451074 | orchestrator | 2026-02-04 00:59:40.451082 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2026-02-04 00:59:40.451088 | orchestrator | Wednesday 04 February 2026 00:59:22 +0000 (0:00:14.084) 0:07:37.879 **** 2026-02-04 00:59:40.451096 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:59:40.451112 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:59:40.451120 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:59:40.451127 | orchestrator | 2026-02-04 00:59:40.451134 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2026-02-04 00:59:40.451141 | orchestrator | Wednesday 04 February 2026 00:59:23 +0000 
(0:00:01.372) 0:07:39.252 **** 2026-02-04 00:59:40.451147 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:59:40.451154 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:59:40.451160 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:59:40.451167 | orchestrator | 2026-02-04 00:59:40.451174 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2026-02-04 00:59:40.451180 | orchestrator | Wednesday 04 February 2026 00:59:32 +0000 (0:00:08.732) 0:07:47.985 **** 2026-02-04 00:59:40.451188 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:59:40.451195 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:59:40.451202 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:59:40.451208 | orchestrator | 2026-02-04 00:59:40.451215 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2026-02-04 00:59:40.451222 | orchestrator | Wednesday 04 February 2026 00:59:33 +0000 (0:00:00.387) 0:07:48.373 **** 2026-02-04 00:59:40.451228 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:59:40.451234 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:59:40.451241 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:59:40.451248 | orchestrator | 2026-02-04 00:59:40.451254 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2026-02-04 00:59:40.451260 | orchestrator | Wednesday 04 February 2026 00:59:33 +0000 (0:00:00.429) 0:07:48.803 **** 2026-02-04 00:59:40.451267 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:59:40.451273 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:59:40.451280 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:59:40.451288 | orchestrator | 2026-02-04 00:59:40.451295 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2026-02-04 00:59:40.451310 | orchestrator | Wednesday 04 February 2026 00:59:34 +0000 
(0:00:00.907) 0:07:49.710 **** 2026-02-04 00:59:40.451318 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:59:40.451324 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:59:40.451331 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:59:40.451338 | orchestrator | 2026-02-04 00:59:40.451344 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2026-02-04 00:59:40.451358 | orchestrator | Wednesday 04 February 2026 00:59:34 +0000 (0:00:00.453) 0:07:50.164 **** 2026-02-04 00:59:40.451367 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:59:40.451374 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:59:40.451381 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:59:40.451387 | orchestrator | 2026-02-04 00:59:40.451394 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2026-02-04 00:59:40.451401 | orchestrator | Wednesday 04 February 2026 00:59:35 +0000 (0:00:00.418) 0:07:50.583 **** 2026-02-04 00:59:40.451409 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:59:40.451416 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:59:40.451423 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:59:40.451430 | orchestrator | 2026-02-04 00:59:40.451436 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2026-02-04 00:59:40.451444 | orchestrator | Wednesday 04 February 2026 00:59:35 +0000 (0:00:00.478) 0:07:51.061 **** 2026-02-04 00:59:40.451451 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:59:40.451459 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:59:40.451466 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:59:40.451473 | orchestrator | 2026-02-04 00:59:40.451481 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2026-02-04 00:59:40.451487 | orchestrator | Wednesday 04 February 2026 00:59:37 +0000 (0:00:01.734) 
0:07:52.796 **** 2026-02-04 00:59:40.451495 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:59:40.451501 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:59:40.451508 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:59:40.451514 | orchestrator | 2026-02-04 00:59:40.451520 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 00:59:40.451528 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-02-04 00:59:40.451536 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-02-04 00:59:40.451543 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-02-04 00:59:40.451550 | orchestrator | 2026-02-04 00:59:40.451556 | orchestrator | 2026-02-04 00:59:40.451562 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-04 00:59:40.451569 | orchestrator | Wednesday 04 February 2026 00:59:38 +0000 (0:00:00.953) 0:07:53.749 **** 2026-02-04 00:59:40.451576 | orchestrator | =============================================================================== 2026-02-04 00:59:40.451583 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 14.08s 2026-02-04 00:59:40.451590 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 9.70s 2026-02-04 00:59:40.451596 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 8.73s 2026-02-04 00:59:40.451602 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 7.91s 2026-02-04 00:59:40.451608 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 7.31s 2026-02-04 00:59:40.451614 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 6.84s 2026-02-04 
00:59:40.451620 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 6.67s 2026-02-04 00:59:40.451627 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 6.54s 2026-02-04 00:59:40.451642 | orchestrator | service-cert-copy : loadbalancer | Copying over extra CA certificates --- 5.63s 2026-02-04 00:59:40.451657 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 5.50s 2026-02-04 00:59:40.451667 | orchestrator | haproxy-config : Copying over horizon haproxy config -------------------- 5.40s 2026-02-04 00:59:40.451674 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 5.39s 2026-02-04 00:59:40.451680 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 5.28s 2026-02-04 00:59:40.451686 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 5.16s 2026-02-04 00:59:40.451738 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 5.04s 2026-02-04 00:59:40.451748 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.90s 2026-02-04 00:59:40.451755 | orchestrator | haproxy-config : Add configuration for glance when using single external frontend --- 4.79s 2026-02-04 00:59:40.451761 | orchestrator | haproxy-config : Copying over keystone haproxy config ------------------- 4.78s 2026-02-04 00:59:40.451767 | orchestrator | loadbalancer : Copying over config.json files for services -------------- 4.74s 2026-02-04 00:59:40.451773 | orchestrator | haproxy-config : Copying over grafana haproxy config -------------------- 4.66s 2026-02-04 00:59:40.451780 | orchestrator | 2026-02-04 00:59:40 | INFO  | Task 4b82724b-ecb1-4316-873a-4f7a89ed7b41 is in state STARTED 2026-02-04 00:59:40.451786 | orchestrator | 2026-02-04 00:59:40 | INFO  | Wait 1 second(s) until the next check 2026-02-04 
01:01:30.631888 | orchestrator | 2026-02-04 01:01:30 | INFO  | Task c9fb5ae2-a449-45e6-a762-fb9d9f550899 is in state STARTED 2026-02-04 01:01:30.635901 | orchestrator | 2026-02-04 01:01:30 | INFO  | Task 8d355d97-e214-48f6-8e55-8a7667fb456c is in state STARTED 2026-02-04 01:01:30.645398 | orchestrator | 2026-02-04 01:01:30 | INFO  | Task 4b82724b-ecb1-4316-873a-4f7a89ed7b41 is in state SUCCESS 2026-02-04 01:01:30.647512 | orchestrator | 2026-02-04 01:01:30.647601 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-02-04 01:01:30.647609 | orchestrator | 2.16.14 2026-02-04 01:01:30.647615 | orchestrator | 2026-02-04 01:01:30.647620 | orchestrator | PLAY [Prepare deployment of Ceph services] ************************************* 2026-02-04 01:01:30.647626 | orchestrator | 2026-02-04 01:01:30.647631 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-04 01:01:30.647636 | orchestrator | Wednesday 04 February 2026 00:48:52 +0000 (0:00:01.002) 0:00:01.002 **** 2026-02-04 01:01:30.647642 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-04 01:01:30.647648 | orchestrator | 2026-02-04 01:01:30.647653 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-04 01:01:30.647666 | orchestrator | Wednesday 04 February 2026 00:48:53 +0000 (0:00:01.567) 0:00:02.569 **** 2026-02-04 01:01:30.647672 | orchestrator | ok: [testbed-node-3] 2026-02-04 01:01:30.647678 | orchestrator | ok: [testbed-node-1] 2026-02-04 01:01:30.647683 | orchestrator | ok: 
[testbed-node-0] 2026-02-04 01:01:30.647688 | orchestrator | ok: [testbed-node-2] 2026-02-04 01:01:30.647693 | orchestrator | ok: [testbed-node-4] 2026-02-04 01:01:30.647698 | orchestrator | ok: [testbed-node-5] 2026-02-04 01:01:30.647703 | orchestrator | 2026-02-04 01:01:30.647708 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-04 01:01:30.647713 | orchestrator | Wednesday 04 February 2026 00:48:56 +0000 (0:00:02.825) 0:00:05.395 **** 2026-02-04 01:01:30.647718 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:01:30.647723 | orchestrator | ok: [testbed-node-1] 2026-02-04 01:01:30.647728 | orchestrator | ok: [testbed-node-2] 2026-02-04 01:01:30.647733 | orchestrator | ok: [testbed-node-3] 2026-02-04 01:01:30.647738 | orchestrator | ok: [testbed-node-4] 2026-02-04 01:01:30.647742 | orchestrator | ok: [testbed-node-5] 2026-02-04 01:01:30.647747 | orchestrator | 2026-02-04 01:01:30.647752 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-04 01:01:30.647757 | orchestrator | Wednesday 04 February 2026 00:48:57 +0000 (0:00:01.178) 0:00:06.573 **** 2026-02-04 01:01:30.647762 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:01:30.647767 | orchestrator | ok: [testbed-node-1] 2026-02-04 01:01:30.647802 | orchestrator | ok: [testbed-node-2] 2026-02-04 01:01:30.647811 | orchestrator | ok: [testbed-node-3] 2026-02-04 01:01:30.647835 | orchestrator | ok: [testbed-node-4] 2026-02-04 01:01:30.647843 | orchestrator | ok: [testbed-node-5] 2026-02-04 01:01:30.647851 | orchestrator | 2026-02-04 01:01:30.647859 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-04 01:01:30.647867 | orchestrator | Wednesday 04 February 2026 00:48:59 +0000 (0:00:01.227) 0:00:07.800 **** 2026-02-04 01:01:30.647874 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:01:30.647884 | orchestrator | ok: [testbed-node-1] 2026-02-04 
01:01:30.647892 | orchestrator | ok: [testbed-node-2]
2026-02-04 01:01:30.647901 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:01:30.647910 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:01:30.647919 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:01:30.647928 | orchestrator |
2026-02-04 01:01:30.647937 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-02-04 01:01:30.647945 | orchestrator | Wednesday 04 February 2026 00:49:00 +0000 (0:00:01.318) 0:00:09.118 ****
2026-02-04 01:01:30.647954 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:01:30.647963 | orchestrator | ok: [testbed-node-1]
2026-02-04 01:01:30.647972 | orchestrator | ok: [testbed-node-2]
2026-02-04 01:01:30.647981 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:01:30.648072 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:01:30.648081 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:01:30.648091 | orchestrator |
2026-02-04 01:01:30.648100 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-02-04 01:01:30.648109 | orchestrator | Wednesday 04 February 2026 00:49:01 +0000 (0:00:00.901) 0:00:10.020 ****
2026-02-04 01:01:30.648116 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:01:30.648122 | orchestrator | ok: [testbed-node-1]
2026-02-04 01:01:30.648128 | orchestrator | ok: [testbed-node-2]
2026-02-04 01:01:30.648134 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:01:30.648140 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:01:30.648146 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:01:30.648152 | orchestrator |
2026-02-04 01:01:30.648158 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-02-04 01:01:30.648164 | orchestrator | Wednesday 04 February 2026 00:49:02 +0000 (0:00:01.295) 0:00:11.316 ****
2026-02-04 01:01:30.648171 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:01:30.648179 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:01:30.648189 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:01:30.648201 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:01:30.648208 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:01:30.648217 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:01:30.648225 | orchestrator |
2026-02-04 01:01:30.648231 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-02-04 01:01:30.648239 | orchestrator | Wednesday 04 February 2026 00:49:03 +0000 (0:00:01.297) 0:00:12.613 ****
2026-02-04 01:01:30.648276 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:01:30.648285 | orchestrator | ok: [testbed-node-1]
2026-02-04 01:01:30.648292 | orchestrator | ok: [testbed-node-2]
2026-02-04 01:01:30.648299 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:01:30.648305 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:01:30.648311 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:01:30.648316 | orchestrator |
2026-02-04 01:01:30.648322 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-02-04 01:01:30.648328 | orchestrator | Wednesday 04 February 2026 00:49:06 +0000 (0:00:02.264) 0:00:14.878 ****
2026-02-04 01:01:30.648335 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-04 01:01:30.648340 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-04 01:01:30.648346 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-04 01:01:30.648352 | orchestrator |
2026-02-04 01:01:30.648357 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-02-04 01:01:30.648363 | orchestrator | Wednesday 04 February 2026 00:49:07 +0000 (0:00:01.129) 0:00:16.007 ****
2026-02-04 01:01:30.648376 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:01:30.648382 | orchestrator | ok: [testbed-node-1]
2026-02-04 01:01:30.648388 | orchestrator | ok: [testbed-node-2]
2026-02-04 01:01:30.648394 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:01:30.648411 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:01:30.648417 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:01:30.648423 | orchestrator |
2026-02-04 01:01:30.648429 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-02-04 01:01:30.648435 | orchestrator | Wednesday 04 February 2026 00:49:09 +0000 (0:00:04.339) 0:00:17.859 ****
2026-02-04 01:01:30.648441 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-04 01:01:30.648447 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-04 01:01:30.648453 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-04 01:01:30.648458 | orchestrator |
2026-02-04 01:01:30.648464 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-02-04 01:01:30.648522 | orchestrator | Wednesday 04 February 2026 00:49:13 +0000 (0:00:04.339) 0:00:22.199 ****
2026-02-04 01:01:30.648549 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-04 01:01:30.648558 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-04 01:01:30.648565 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-04 01:01:30.648572 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:01:30.648581 | orchestrator |
2026-02-04 01:01:30.648589 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-02-04 01:01:30.648598 | orchestrator | Wednesday 04 February 2026 00:49:14 +0000 (0:00:00.650) 0:00:22.849 ****
2026-02-04 01:01:30.648607 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True,
'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-04 01:01:30.648616 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-04 01:01:30.648622 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-04 01:01:30.648627 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:01:30.648632 | orchestrator | 2026-02-04 01:01:30.648637 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-04 01:01:30.648642 | orchestrator | Wednesday 04 February 2026 00:49:15 +0000 (0:00:01.768) 0:00:24.617 **** 2026-02-04 01:01:30.648667 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-04 01:01:30.648675 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-04 
01:01:30.648680 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-04 01:01:30.648691 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:01:30.648775 | orchestrator | 2026-02-04 01:01:30.648783 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-04 01:01:30.648791 | orchestrator | Wednesday 04 February 2026 00:49:16 +0000 (0:00:00.291) 0:00:24.909 **** 2026-02-04 01:01:30.648808 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-04 00:49:09.991038', 'end': '2026-02-04 00:49:10.208004', 'delta': '0:00:00.216966', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-04 01:01:30.648823 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-04 00:49:11.660342', 'end': '2026-02-04 00:49:11.941749', 'delta': '0:00:00.281407', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', 
'_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-04 01:01:30.648833 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-04 00:49:12.766926', 'end': '2026-02-04 00:49:13.083817', 'delta': '0:00:00.316891', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-04 01:01:30.648842 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:01:30.648850 | orchestrator | 2026-02-04 01:01:30.648857 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-04 01:01:30.648862 | orchestrator | Wednesday 04 February 2026 00:49:17 +0000 (0:00:00.875) 0:00:25.784 **** 2026-02-04 01:01:30.648867 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:01:30.648872 | orchestrator | ok: [testbed-node-1] 2026-02-04 01:01:30.648877 | orchestrator | ok: [testbed-node-2] 2026-02-04 01:01:30.648882 | orchestrator | ok: [testbed-node-3] 2026-02-04 01:01:30.648887 | orchestrator | ok: [testbed-node-4] 2026-02-04 01:01:30.648892 | orchestrator | ok: [testbed-node-5] 2026-02-04 01:01:30.648897 | orchestrator | 2026-02-04 01:01:30.648902 | orchestrator | TASK [ceph-facts : Get current fsid if 
cluster is already running] *************
2026-02-04 01:01:30.648907 | orchestrator | Wednesday 04 February 2026 00:49:19 +0000 (0:00:02.743) 0:00:28.527 ****
2026-02-04 01:01:30.648912 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:01:30.648917 | orchestrator |
2026-02-04 01:01:30.648922 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-02-04 01:01:30.648927 | orchestrator | Wednesday 04 February 2026 00:49:20 +0000 (0:00:01.072) 0:00:29.600 ****
2026-02-04 01:01:30.648931 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:01:30.648942 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:01:30.648947 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:01:30.648952 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:01:30.648957 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:01:30.648962 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:01:30.648967 | orchestrator |
2026-02-04 01:01:30.648972 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-02-04 01:01:30.648977 | orchestrator | Wednesday 04 February 2026 00:49:22 +0000 (0:00:02.162) 0:00:31.763 ****
2026-02-04 01:01:30.648982 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:01:30.648987 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:01:30.648992 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:01:30.648997 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:01:30.649002 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:01:30.649007 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:01:30.649012 | orchestrator |
2026-02-04 01:01:30.649017 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-04 01:01:30.649022 | orchestrator | Wednesday 04 February 2026 00:49:25 +0000 (0:00:02.358) 0:00:34.122 ****
2026-02-04 01:01:30.649027 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:01:30.649032 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:01:30.649036 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:01:30.649041 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:01:30.649046 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:01:30.649051 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:01:30.649056 | orchestrator |
2026-02-04 01:01:30.649061 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-02-04 01:01:30.649066 | orchestrator | Wednesday 04 February 2026 00:49:28 +0000 (0:00:02.923) 0:00:37.045 ****
2026-02-04 01:01:30.649071 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:01:30.649076 | orchestrator |
2026-02-04 01:01:30.649081 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-02-04 01:01:30.649086 | orchestrator | Wednesday 04 February 2026 00:49:28 +0000 (0:00:00.540) 0:00:37.586 ****
2026-02-04 01:01:30.649109 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:01:30.649115 | orchestrator |
2026-02-04 01:01:30.649120 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-04 01:01:30.649125 | orchestrator | Wednesday 04 February 2026 00:49:29 +0000 (0:00:00.752) 0:00:38.338 ****
2026-02-04 01:01:30.649130 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:01:30.649135 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:01:30.649140 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:01:30.649145 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:01:30.649150 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:01:30.649157 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:01:30.649165 | orchestrator |
2026-02-04 01:01:30.649237 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-02-04 01:01:30.649248 | orchestrator | Wednesday 04 February 2026 00:49:30 +0000 (0:00:01.075) 0:00:39.414 ****
2026-02-04 01:01:30.649256 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:01:30.649264 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:01:30.649272 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:01:30.649280 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:01:30.649288 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:01:30.649297 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:01:30.649363 | orchestrator |
2026-02-04 01:01:30.649369 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-02-04 01:01:30.649375 | orchestrator | Wednesday 04 February 2026 00:49:32 +0000 (0:00:01.641) 0:00:41.055 ****
2026-02-04 01:01:30.649379 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:01:30.649388 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:01:30.649393 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:01:30.649398 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:01:30.649408 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:01:30.649413 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:01:30.649417 | orchestrator |
2026-02-04 01:01:30.649422 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-02-04 01:01:30.649427 | orchestrator | Wednesday 04 February 2026 00:49:33 +0000 (0:00:01.654) 0:00:42.710 ****
2026-02-04 01:01:30.649432 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:01:30.649437 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:01:30.649442 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:01:30.649447 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:01:30.649452 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:01:30.649457 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:01:30.649462 | orchestrator |
2026-02-04 01:01:30.649467 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-02-04 01:01:30.649471 | orchestrator | Wednesday 04 February 2026 00:49:36 +0000 (0:00:02.081) 0:00:44.791 ****
2026-02-04 01:01:30.649476 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:01:30.649481 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:01:30.649486 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:01:30.649491 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:01:30.649496 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:01:30.649501 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:01:30.649506 | orchestrator |
2026-02-04 01:01:30.649511 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-02-04 01:01:30.649515 | orchestrator | Wednesday 04 February 2026 00:49:37 +0000 (0:00:01.099) 0:00:45.891 ****
2026-02-04 01:01:30.649520 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:01:30.649525 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:01:30.649543 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:01:30.649548 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:01:30.649553 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:01:30.649558 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:01:30.649563 | orchestrator |
2026-02-04 01:01:30.649568 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-02-04 01:01:30.649573 | orchestrator | Wednesday 04 February 2026 00:49:38 +0000 (0:00:01.294) 0:00:47.185 ****
2026-02-04 01:01:30.649578 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:01:30.649583 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:01:30.649587 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:01:30.649592 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:01:30.649597 | orchestrator | skipping:
[testbed-node-4] 2026-02-04 01:01:30.649602 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:01:30.649607 | orchestrator | 2026-02-04 01:01:30.649612 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-04 01:01:30.649617 | orchestrator | Wednesday 04 February 2026 00:49:39 +0000 (0:00:01.083) 0:00:48.268 **** 2026-02-04 01:01:30.649622 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 01:01:30.649628 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 01:01:30.649633 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 01:01:30.649646 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 
'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 01:01:30.649652 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 01:01:30.649659 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 01:01:30.649665 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 01:01:30.649670 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}})  2026-02-04 01:01:30.649677 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_739b3430-b44e-4a37-a610-d4b8eb445a30', 'scsi-SQEMU_QEMU_HARDDISK_739b3430-b44e-4a37-a610-d4b8eb445a30'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_739b3430-b44e-4a37-a610-d4b8eb445a30-part1', 'scsi-SQEMU_QEMU_HARDDISK_739b3430-b44e-4a37-a610-d4b8eb445a30-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_739b3430-b44e-4a37-a610-d4b8eb445a30-part14', 'scsi-SQEMU_QEMU_HARDDISK_739b3430-b44e-4a37-a610-d4b8eb445a30-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_739b3430-b44e-4a37-a610-d4b8eb445a30-part15', 'scsi-SQEMU_QEMU_HARDDISK_739b3430-b44e-4a37-a610-d4b8eb445a30-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_739b3430-b44e-4a37-a610-d4b8eb445a30-part16', 'scsi-SQEMU_QEMU_HARDDISK_739b3430-b44e-4a37-a610-d4b8eb445a30-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-04 01:01:30.649706 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-04-00-03-02-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-04 01:01:30.649715 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 01:01:30.649721 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 01:01:30.649726 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 01:01:30.649731 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 01:01:30.649748 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 01:01:30.649754 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 01:01:30.649759 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  
2026-02-04 01:01:30.649782 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 01:01:30.649816 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_672dc836-5b98-47e8-81c3-e5596cac2995', 'scsi-SQEMU_QEMU_HARDDISK_672dc836-5b98-47e8-81c3-e5596cac2995'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_672dc836-5b98-47e8-81c3-e5596cac2995-part1', 'scsi-SQEMU_QEMU_HARDDISK_672dc836-5b98-47e8-81c3-e5596cac2995-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_672dc836-5b98-47e8-81c3-e5596cac2995-part14', 'scsi-SQEMU_QEMU_HARDDISK_672dc836-5b98-47e8-81c3-e5596cac2995-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_672dc836-5b98-47e8-81c3-e5596cac2995-part15', 'scsi-SQEMU_QEMU_HARDDISK_672dc836-5b98-47e8-81c3-e5596cac2995-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_672dc836-5b98-47e8-81c3-e5596cac2995-part16', 'scsi-SQEMU_QEMU_HARDDISK_672dc836-5b98-47e8-81c3-e5596cac2995-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-04 01:01:30.649824 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-04-00-03-08-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-04 01:01:30.649829 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:01:30.649834 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:01:30.649839 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 01:01:30.649848 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-04 01:01:30.649867 | orchestrator | skipping: [testbed-node-2] => (item=loop2)
2026-02-04 01:01:30.649882 | orchestrator | skipping: [testbed-node-3] => (item=dm-0)
2026-02-04 01:01:30.649895 | orchestrator | skipping: [testbed-node-3] => (item=dm-1)
2026-02-04 01:01:30.649904 | orchestrator | skipping: [testbed-node-2] => (item=loop3)
2026-02-04 01:01:30.649912 | orchestrator | skipping: [testbed-node-3] => (item=loop0)
2026-02-04 01:01:30.649921 | orchestrator | skipping: [testbed-node-2] => (item=loop4)
2026-02-04 01:01:30.649929 | orchestrator | skipping: [testbed-node-3] => (item=loop1)
2026-02-04 01:01:30.649938 | orchestrator | skipping: [testbed-node-2] => (item=loop5)
2026-02-04 01:01:30.649953 | orchestrator | skipping: [testbed-node-3] => (item=loop2)
2026-02-04 01:01:30.649962 | orchestrator | skipping: [testbed-node-2] => (item=loop6)
2026-02-04 01:01:30.650649 | orchestrator | skipping: [testbed-node-3] => (item=loop3)
2026-02-04 01:01:30.650671 | orchestrator | skipping: [testbed-node-2] => (item=loop7)
2026-02-04 01:01:30.650679 | orchestrator | skipping: [testbed-node-3] => (item=loop4)
2026-02-04 01:01:30.650686 | orchestrator | skipping: [testbed-node-2] => (item=sda)
2026-02-04 01:01:30.650698 | orchestrator | skipping: [testbed-node-2] => (item=sr0)
2026-02-04 01:01:30.650703 | orchestrator | skipping: [testbed-node-3] => (item=loop5)
2026-02-04 01:01:30.650713 | orchestrator | skipping: [testbed-node-3] => (item=loop6)
2026-02-04 01:01:30.650721 | orchestrator | skipping: [testbed-node-3] => (item=loop7)
2026-02-04 01:01:30.650727 | orchestrator | skipping: [testbed-node-4] => (item=dm-0)
2026-02-04 01:01:30.650732 | orchestrator | skipping: [testbed-node-3] => (item=sda)
2026-02-04 01:01:30.650744 | orchestrator | skipping: [testbed-node-4] => (item=dm-1)
2026-02-04 01:01:30.650752 | orchestrator | skipping: [testbed-node-3] => (item=sdb)
2026-02-04 01:01:30.650761 | orchestrator | skipping: [testbed-node-4] => (item=loop0)
2026-02-04 01:01:30.650772 | orchestrator | skipping: [testbed-node-3] => (item=sdc)
2026-02-04 01:01:30.650785 | orchestrator | skipping: [testbed-node-4] => (item=loop1)
2026-02-04 01:01:30.650799 | orchestrator | skipping: [testbed-node-3] => (item=sdd)
2026-02-04 01:01:30.650807 | orchestrator | skipping: [testbed-node-3] => (item=sr0)
2026-02-04 01:01:30.650815 | orchestrator | skipping: [testbed-node-4] => (item=loop2)
2026-02-04 01:01:30.650828 | orchestrator | skipping: [testbed-node-4] => (item=loop3)
2026-02-04 01:01:30.650841 | orchestrator | skipping: [testbed-node-4] => (item=loop4)
2026-02-04 01:01:30.650849 | orchestrator | skipping: [testbed-node-4] => (item=loop5)
2026-02-04 01:01:30.650858 | orchestrator | skipping: [testbed-node-4] => (item=loop6)
2026-02-04 01:01:30.650866 | orchestrator | skipping: [testbed-node-4] => (item=loop7)
2026-02-04 01:01:30.650887 | orchestrator | skipping: [testbed-node-4] => (item=sda)
2026-02-04 01:01:30.650901 | orchestrator | skipping: [testbed-node-4] => (item=sdb)
2026-02-04 01:01:30.650911 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:01:30.650920 | orchestrator | skipping: [testbed-node-4] => (item=sdc)
2026-02-04 01:01:30.650930 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:01:30.650940 | orchestrator | skipping: [testbed-node-4] => (item=sdd)
2026-02-04 01:01:30.650955 | orchestrator | skipping: [testbed-node-4] => (item=sr0)
2026-02-04 01:01:30.650965 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:01:30.650974 | orchestrator | skipping: [testbed-node-5] => (item=dm-0)
2026-02-04 01:01:30.650987 | orchestrator | skipping: [testbed-node-5] => (item=dm-1)
2026-02-04 01:01:30.650996 | orchestrator | skipping: [testbed-node-5] => (item=loop0)
2026-02-04 01:01:30.651012 | orchestrator | skipping: [testbed-node-5] => (item=loop1)
2026-02-04 01:01:30.651021 | orchestrator | skipping: [testbed-node-5] => (item=loop2)
2026-02-04 01:01:30.651030 | orchestrator | skipping: [testbed-node-5] => (item=loop3)
2026-02-04 01:01:30.651044 | orchestrator | skipping: [testbed-node-5] => (item=loop4)
2026-02-04 01:01:30.651052 | orchestrator | skipping: [testbed-node-5] => (item=loop5)
2026-02-04 01:01:30.651060 | orchestrator | skipping: [testbed-node-5] => (item=loop6)
2026-02-04 01:01:30.651069 | orchestrator | skipping: [testbed-node-5] => (item=loop7)
2026-02-04 01:01:30.651088 | orchestrator | skipping: [testbed-node-5] => (item=sda)
2026-02-04 01:01:30.651097 | orchestrator | skipping: [testbed-node-5] => (item=sdb)
2026-02-04 01:01:30.651103 | orchestrator | skipping: [testbed-node-5] => (item=sdc)
2026-02-04 01:01:30.651108 | orchestrator | skipping: [testbed-node-5] => (item=sdd)
2026-02-04 01:01:30.651117 | orchestrator | skipping: [testbed-node-5] => (item=sr0)
2026-02-04 01:01:30.651122 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:01:30.651127 | orchestrator |
2026-02-04 01:01:30.651133 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-02-04 01:01:30.651138 | orchestrator | Wednesday 04 February 2026 00:49:43 +0000 (0:00:03.675) 0:00:51.944 ****
2026-02-04 01:01:30.651146 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-04 01:01:30.651152 | orchestrator | skipping: [testbed-node-0] => (item=loop1)
2026-02-04 01:01:30.651160 | orchestrator | skipping: [testbed-node-0] => (item=loop2)
2026-02-04 01:01:30.651166 | orchestrator | skipping: [testbed-node-0] => (item=loop3)
2026-02-04 01:01:30.651171 | orchestrator | skipping: [testbed-node-0] => (item=loop4)
2026-02-04 01:01:30.651176 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes',
'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 01:01:30.651184 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 01:01:30.651191 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 01:01:30.651199 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 01:01:30.651205 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 01:01:30.651210 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 01:01:30.651221 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_739b3430-b44e-4a37-a610-d4b8eb445a30', 'scsi-SQEMU_QEMU_HARDDISK_739b3430-b44e-4a37-a610-d4b8eb445a30'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_739b3430-b44e-4a37-a610-d4b8eb445a30-part1', 'scsi-SQEMU_QEMU_HARDDISK_739b3430-b44e-4a37-a610-d4b8eb445a30-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_739b3430-b44e-4a37-a610-d4b8eb445a30-part14', 'scsi-SQEMU_QEMU_HARDDISK_739b3430-b44e-4a37-a610-d4b8eb445a30-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_739b3430-b44e-4a37-a610-d4b8eb445a30-part15', 'scsi-SQEMU_QEMU_HARDDISK_739b3430-b44e-4a37-a610-d4b8eb445a30-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_739b3430-b44e-4a37-a610-d4b8eb445a30-part16', 'scsi-SQEMU_QEMU_HARDDISK_739b3430-b44e-4a37-a610-d4b8eb445a30-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-02-04 01:01:30.651231 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 01:01:30.651236 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 01:01:30.651242 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-04-00-03-02-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': 
'506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 01:01:30.651248 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 01:01:30.651302 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 01:01:30.651316 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 01:01:30.651331 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_672dc836-5b98-47e8-81c3-e5596cac2995', 'scsi-SQEMU_QEMU_HARDDISK_672dc836-5b98-47e8-81c3-e5596cac2995'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_672dc836-5b98-47e8-81c3-e5596cac2995-part1', 'scsi-SQEMU_QEMU_HARDDISK_672dc836-5b98-47e8-81c3-e5596cac2995-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_672dc836-5b98-47e8-81c3-e5596cac2995-part14', 'scsi-SQEMU_QEMU_HARDDISK_672dc836-5b98-47e8-81c3-e5596cac2995-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_672dc836-5b98-47e8-81c3-e5596cac2995-part15', 'scsi-SQEMU_QEMU_HARDDISK_672dc836-5b98-47e8-81c3-e5596cac2995-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_672dc836-5b98-47e8-81c3-e5596cac2995-part16', 'scsi-SQEMU_QEMU_HARDDISK_672dc836-5b98-47e8-81c3-e5596cac2995-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 
'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 01:01:30.651341 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-04-00-03-08-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 01:01:30.651411 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 01:01:30.651434 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was 
False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 01:01:30.651441 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 01:01:30.651447 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 01:01:30.651454 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname 
in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 01:01:30.651459 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 01:01:30.651470 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 01:01:30.651479 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': 
{'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 01:01:30.651491 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b9850f37-5fe6-4942-bfdc-bc374f48b750', 'scsi-SQEMU_QEMU_HARDDISK_b9850f37-5fe6-4942-bfdc-bc374f48b750'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b9850f37-5fe6-4942-bfdc-bc374f48b750-part1', 'scsi-SQEMU_QEMU_HARDDISK_b9850f37-5fe6-4942-bfdc-bc374f48b750-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b9850f37-5fe6-4942-bfdc-bc374f48b750-part14', 'scsi-SQEMU_QEMU_HARDDISK_b9850f37-5fe6-4942-bfdc-bc374f48b750-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b9850f37-5fe6-4942-bfdc-bc374f48b750-part15', 'scsi-SQEMU_QEMU_HARDDISK_b9850f37-5fe6-4942-bfdc-bc374f48b750-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 
512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b9850f37-5fe6-4942-bfdc-bc374f48b750-part16', 'scsi-SQEMU_QEMU_HARDDISK_b9850f37-5fe6-4942-bfdc-bc374f48b750-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 01:01:30.651498 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-04-00-03-04-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 01:01:30.651504 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:01:30.651514 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--cab1220b--9ff6--5009--b197--fa753e4036d2-osd--block--cab1220b--9ff6--5009--b197--fa753e4036d2', 'dm-uuid-LVM-i1ir8cW1PvWS9XJjL7rtGfPs74IrwS1OtXRgctxodwlzbnYu05YC6ITqVCjt3Ewp'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 01:01:30.651525 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:01:30.651549 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--4adee4b4--d62b--5502--a742--8ac6c3138b01-osd--block--4adee4b4--d62b--5502--a742--8ac6c3138b01', 'dm-uuid-LVM-SU4etYSpWEq0QUIDoovTGPho7gvfQS4CfqyRGhgWRYEgUBuM6qPphK1xLHiYiX7n'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 01:01:30.651556 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 
'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 01:01:30.651562 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 01:01:30.651569 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 01:01:30.651574 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 
'item'})  2026-02-04 01:01:30.651585 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 01:01:30.651595 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:01:30.651601 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 01:01:30.651607 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6cd3944c--50dd--590e--9699--94e09e9b1959-osd--block--6cd3944c--50dd--590e--9699--94e09e9b1959', 'dm-uuid-LVM-XgXlScZWWizuOO8Naf9sj1Y6ACIIQX3P3IkOOLP61fS3F3tEGJHex2gC82E55wff'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 01:01:30.651614 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 01:01:30.651620 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--197bc0b1--bda8--5def--b850--786176b935dd-osd--block--197bc0b1--bda8--5def--b850--786176b935dd', 'dm-uuid-LVM-nemC0iKe6zNA0EcGvn8fmzYHB56fDHxebgnqpGfAfDUnXsi33ExeGzK6cfU0FmVZ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 01:01:30.651625 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 01:01:30.651654 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 01:01:30.651666 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 01:01:30.651672 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_80bce2bb-e18d-4255-9d30-172ea54b11f6', 'scsi-SQEMU_QEMU_HARDDISK_80bce2bb-e18d-4255-9d30-172ea54b11f6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_80bce2bb-e18d-4255-9d30-172ea54b11f6-part1', 'scsi-SQEMU_QEMU_HARDDISK_80bce2bb-e18d-4255-9d30-172ea54b11f6-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_80bce2bb-e18d-4255-9d30-172ea54b11f6-part14', 'scsi-SQEMU_QEMU_HARDDISK_80bce2bb-e18d-4255-9d30-172ea54b11f6-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_80bce2bb-e18d-4255-9d30-172ea54b11f6-part15', 'scsi-SQEMU_QEMU_HARDDISK_80bce2bb-e18d-4255-9d30-172ea54b11f6-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_80bce2bb-e18d-4255-9d30-172ea54b11f6-part16', 'scsi-SQEMU_QEMU_HARDDISK_80bce2bb-e18d-4255-9d30-172ea54b11f6-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-02-04 01:01:30.651677 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 01:01:30.652365 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--cab1220b--9ff6--5009--b197--fa753e4036d2-osd--block--cab1220b--9ff6--5009--b197--fa753e4036d2'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-rJP7yo-d0Io-2Sbh-p8jO-QRbP-JI2P-SK5YlT', 'scsi-0QEMU_QEMU_HARDDISK_03b06afa-7fea-4d7e-bf2e-7215727f5f52', 'scsi-SQEMU_QEMU_HARDDISK_03b06afa-7fea-4d7e-bf2e-7215727f5f52'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 01:01:30.652382 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 01:01:30.652388 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--4adee4b4--d62b--5502--a742--8ac6c3138b01-osd--block--4adee4b4--d62b--5502--a742--8ac6c3138b01'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-AbG5Ab-g41T-U6Ls-d9pt-UBR4-ZKCx-x9UiyH', 'scsi-0QEMU_QEMU_HARDDISK_54fed4a3-dd06-43ea-9731-a81abbed62bd', 'scsi-SQEMU_QEMU_HARDDISK_54fed4a3-dd06-43ea-9731-a81abbed62bd'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 01:01:30.652393 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e3daecb5--9fd0--5834--b191--078d341d10dc-osd--block--e3daecb5--9fd0--5834--b191--078d341d10dc', 'dm-uuid-LVM-b0VKYwSqdivqaHauLtP9AoYkjSg3Qhd1ajk3GtVp2Q0TYOfGU3ZcyDXlPdU0pGPI'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 01:01:30.652399 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--607d890d--3e41--57a1--9874--83b389fa50fb-osd--block--607d890d--3e41--57a1--9874--83b389fa50fb', 'dm-uuid-LVM-tcfEEFY9BrwSTyQheLvKc5mjGSniqt7Qw1sChWus7fPQM1wJdmFmQzYM75n7njop'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 
'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 01:01:30.652414 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd9a253f-e742-4747-9193-aa6fcde93089', 'scsi-SQEMU_QEMU_HARDDISK_fd9a253f-e742-4747-9193-aa6fcde93089'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 01:01:30.652422 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 01:01:30.652427 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 
'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-04-00-03-13-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 01:01:30.652432 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:01:30.652438 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 01:01:30.652443 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 01:01:30.652448 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 01:01:30.652460 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 01:01:30.652468 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 01:01:30.652473 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | 
default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a371d88b-21aa-46d0-9a00-c59fe370106e', 'scsi-SQEMU_QEMU_HARDDISK_a371d88b-21aa-46d0-9a00-c59fe370106e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a371d88b-21aa-46d0-9a00-c59fe370106e-part1', 'scsi-SQEMU_QEMU_HARDDISK_a371d88b-21aa-46d0-9a00-c59fe370106e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a371d88b-21aa-46d0-9a00-c59fe370106e-part14', 'scsi-SQEMU_QEMU_HARDDISK_a371d88b-21aa-46d0-9a00-c59fe370106e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a371d88b-21aa-46d0-9a00-c59fe370106e-part15', 'scsi-SQEMU_QEMU_HARDDISK_a371d88b-21aa-46d0-9a00-c59fe370106e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a371d88b-21aa-46d0-9a00-c59fe370106e-part16', 'scsi-SQEMU_QEMU_HARDDISK_a371d88b-21aa-46d0-9a00-c59fe370106e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': 
'512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 01:01:30.652479 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 01:01:30.652553 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--6cd3944c--50dd--590e--9699--94e09e9b1959-osd--block--6cd3944c--50dd--590e--9699--94e09e9b1959'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-niJgie-P4tu-prGp-syH5-mr1x-Ue9N-Xoxej0', 'scsi-0QEMU_QEMU_HARDDISK_27db8536-d7cf-467f-b2b6-f0129584608d', 'scsi-SQEMU_QEMU_HARDDISK_27db8536-d7cf-467f-b2b6-f0129584608d'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 01:01:30.652565 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 01:01:30.652592 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--197bc0b1--bda8--5def--b850--786176b935dd-osd--block--197bc0b1--bda8--5def--b850--786176b935dd'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-4zvJig-R9CO-DeWu-dQTC-OC2s-SDYV-99Ae0P', 'scsi-0QEMU_QEMU_HARDDISK_d0af9621-3ff0-4b28-b816-705c9ef71a8d', 'scsi-SQEMU_QEMU_HARDDISK_d0af9621-3ff0-4b28-b816-705c9ef71a8d'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 01:01:30.652601 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 01:01:30.652610 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 01:01:30.652629 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional 
result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 01:01:30.652642 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 01:01:30.652652 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_092c1f4e-b194-45a0-a7eb-d90ae37efda4', 'scsi-SQEMU_QEMU_HARDDISK_092c1f4e-b194-45a0-a7eb-d90ae37efda4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 01:01:30.652661 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3f45c181-f890-4932-9a09-2e0bc4fa8f14', 'scsi-SQEMU_QEMU_HARDDISK_3f45c181-f890-4932-9a09-2e0bc4fa8f14'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3f45c181-f890-4932-9a09-2e0bc4fa8f14-part1', 'scsi-SQEMU_QEMU_HARDDISK_3f45c181-f890-4932-9a09-2e0bc4fa8f14-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3f45c181-f890-4932-9a09-2e0bc4fa8f14-part14', 'scsi-SQEMU_QEMU_HARDDISK_3f45c181-f890-4932-9a09-2e0bc4fa8f14-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3f45c181-f890-4932-9a09-2e0bc4fa8f14-part15', 
'scsi-SQEMU_QEMU_HARDDISK_3f45c181-f890-4932-9a09-2e0bc4fa8f14-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3f45c181-f890-4932-9a09-2e0bc4fa8f14-part16', 'scsi-SQEMU_QEMU_HARDDISK_3f45c181-f890-4932-9a09-2e0bc4fa8f14-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 01:01:30.652681 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-04-00-03-10-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 01:01:30.652690 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:01:30.652702 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': 
{'key': 'sdb', 'value': {'holders': ['ceph--e3daecb5--9fd0--5834--b191--078d341d10dc-osd--block--e3daecb5--9fd0--5834--b191--078d341d10dc'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-YLAFzg-kJmY-aUic-VBuH-g3uH-9m4L-hSfk98', 'scsi-0QEMU_QEMU_HARDDISK_7fd0bd10-abd8-4e0c-8290-4705cc531d08', 'scsi-SQEMU_QEMU_HARDDISK_7fd0bd10-abd8-4e0c-8290-4705cc531d08'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 01:01:30.652712 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--607d890d--3e41--57a1--9874--83b389fa50fb-osd--block--607d890d--3e41--57a1--9874--83b389fa50fb'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-pScOt8-ITDZ-tXnq-6HO2-2rSN-88m4-bhVjVu', 'scsi-0QEMU_QEMU_HARDDISK_194ead4c-37cf-4237-b5c2-bf752e6bc508', 'scsi-SQEMU_QEMU_HARDDISK_194ead4c-37cf-4237-b5c2-bf752e6bc508'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-04 01:01:30.652721 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ba01a385-e2e8-43d7-8237-fc6e15a9de89', 'scsi-SQEMU_QEMU_HARDDISK_ba01a385-e2e8-43d7-8237-fc6e15a9de89'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-04 01:01:30.652731 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-04-00-03-31-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-04 01:01:30.652740 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:01:30.652748 | orchestrator |
2026-02-04 01:01:30.652756 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-02-04 01:01:30.652786 | orchestrator | Wednesday 04 February 2026 00:49:45 +0000 (0:00:02.636) 0:00:54.580 ****
2026-02-04 01:01:30.652802 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:01:30.652811 | orchestrator | ok: [testbed-node-1]
2026-02-04 01:01:30.652820 | orchestrator | ok: [testbed-node-2]
2026-02-04 01:01:30.652828 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:01:30.652836 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:01:30.652844 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:01:30.652850 | orchestrator |
2026-02-04 01:01:30.652855 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-02-04 01:01:30.652860 | orchestrator | Wednesday 04 February 2026 00:49:47 +0000 (0:00:01.723) 0:00:56.304 ****
2026-02-04 01:01:30.652865 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:01:30.652870 | orchestrator | ok: [testbed-node-1]
2026-02-04 01:01:30.652874 | orchestrator | ok: [testbed-node-2]
2026-02-04 01:01:30.652879 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:01:30.652884 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:01:30.652889 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:01:30.652894 | orchestrator |
2026-02-04 01:01:30.652901 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-04 01:01:30.652906 | orchestrator | Wednesday 04 February 2026 00:49:49 +0000 (0:00:01.756) 0:00:58.060 ****
2026-02-04 01:01:30.652911 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:01:30.652916 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:01:30.652921 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:01:30.652926 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:01:30.652930 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:01:30.652935 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:01:30.652940 | orchestrator |
2026-02-04 01:01:30.652945 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-04 01:01:30.652950 | orchestrator | Wednesday 04 February 2026 00:49:52 +0000 (0:00:03.256) 0:01:01.317 ****
2026-02-04 01:01:30.652954 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:01:30.652959 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:01:30.652964 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:01:30.652969 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:01:30.652973 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:01:30.652978 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:01:30.652983 | orchestrator |
2026-02-04 01:01:30.652990 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-04 01:01:30.652995 | orchestrator | Wednesday 04 February 2026 00:49:53 +0000 (0:00:01.266) 0:01:02.583 ****
2026-02-04 01:01:30.653001 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:01:30.653007 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:01:30.653013 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:01:30.653019 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:01:30.653028 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:01:30.653034 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:01:30.653039 | orchestrator |
2026-02-04 01:01:30.653045 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-04 01:01:30.653051 | orchestrator | Wednesday 04 February 2026 00:49:55 +0000 (0:00:02.095) 0:01:04.678 ****
2026-02-04 01:01:30.653057 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:01:30.653062 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:01:30.653068 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:01:30.653074 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:01:30.653080 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:01:30.653085 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:01:30.653091 | orchestrator |
2026-02-04 01:01:30.653097 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-02-04 01:01:30.653103 | orchestrator | Wednesday 04 February 2026 00:49:57 +0000 (0:00:01.765) 0:01:06.443 ****
2026-02-04 01:01:30.653108 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2026-02-04 01:01:30.653115 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-04 01:01:30.653120 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-02-04 01:01:30.653127 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2026-02-04 01:01:30.653132 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-02-04 01:01:30.653138 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-02-04 01:01:30.653144 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-02-04 01:01:30.653150 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-02-04 01:01:30.653155 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2026-02-04 01:01:30.653161 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2026-02-04 01:01:30.653167 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-02-04 01:01:30.653173 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-02-04 01:01:30.653178 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-02-04 01:01:30.653185 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-02-04 01:01:30.653189 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-02-04 01:01:30.653194 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-02-04 01:01:30.653199 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-02-04 01:01:30.653204 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-02-04 01:01:30.653209 | orchestrator |
2026-02-04 01:01:30.653213 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-02-04 01:01:30.653218 | orchestrator | Wednesday 04 February 2026 00:50:03 +0000 (0:00:05.910) 0:01:12.354 ****
2026-02-04 01:01:30.653223 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-04 01:01:30.653228 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-04 01:01:30.653233 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-04 01:01:30.653238 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:01:30.653242 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-02-04 01:01:30.653247 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-02-04 01:01:30.653252 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-02-04 01:01:30.653257 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:01:30.653262 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-02-04 01:01:30.653266 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-02-04 01:01:30.653274 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-02-04 01:01:30.653279 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:01:30.653284 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-02-04 01:01:30.653289 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-02-04 01:01:30.653294 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-02-04 01:01:30.653305 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-02-04 01:01:30.653309 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:01:30.653314 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-02-04 01:01:30.653319 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-02-04 01:01:30.653324 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:01:30.653329 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-02-04 01:01:30.653336 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-02-04 01:01:30.653341 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-02-04 01:01:30.653348 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:01:30.653356 | orchestrator |
2026-02-04 01:01:30.653368 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-02-04 01:01:30.653378 | orchestrator | Wednesday 04 February 2026 00:50:04 +0000 (0:00:01.329) 0:01:13.684 ****
2026-02-04 01:01:30.653386 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:01:30.653394 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:01:30.653401 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:01:30.653410 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-04 01:01:30.653418 | orchestrator |
2026-02-04 01:01:30.653426 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-04 01:01:30.653436 | orchestrator | Wednesday 04 February 2026 00:50:07 +0000 (0:00:02.086) 0:01:15.770 ****
2026-02-04 01:01:30.653444 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:01:30.653452 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:01:30.653460 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:01:30.653469 | orchestrator |
2026-02-04 01:01:30.653474 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-04 01:01:30.653479 | orchestrator | Wednesday 04 February 2026 00:50:07 +0000 (0:00:00.664) 0:01:16.435 ****
2026-02-04 01:01:30.653484 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:01:30.653489 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:01:30.653494 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:01:30.653499 | orchestrator |
2026-02-04 01:01:30.653503 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-04 01:01:30.653508 | orchestrator | Wednesday 04 February 2026 00:50:08 +0000 (0:00:00.665) 0:01:17.100 ****
2026-02-04 01:01:30.653513 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:01:30.653518 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:01:30.653523 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:01:30.653543 | orchestrator |
2026-02-04 01:01:30.653551 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-04 01:01:30.653557 | orchestrator | Wednesday 04 February 2026 00:50:09 +0000 (0:00:01.160) 0:01:18.261 ****
2026-02-04 01:01:30.653561 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:01:30.653567 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:01:30.653571 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:01:30.653576 | orchestrator |
2026-02-04 01:01:30.653581 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-04 01:01:30.653586 | orchestrator | Wednesday 04 February 2026 00:50:10 +0000 (0:00:01.256) 0:01:19.518 ****
2026-02-04 01:01:30.653591 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-04 01:01:30.653596 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-04 01:01:30.653601 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-04 01:01:30.653605 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:01:30.653610 | orchestrator |
2026-02-04 01:01:30.653618 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-04 01:01:30.653630 | orchestrator | Wednesday 04 February 2026 00:50:11 +0000 (0:00:00.614) 0:01:20.132 ****
2026-02-04 01:01:30.653645 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-04 01:01:30.653652 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-04 01:01:30.653660 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-04 01:01:30.653668 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:01:30.653675 | orchestrator |
2026-02-04 01:01:30.653682 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-04 01:01:30.653689 | orchestrator | Wednesday 04 February 2026 00:50:12 +0000 (0:00:00.665) 0:01:20.798 ****
2026-02-04 01:01:30.653696 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-04 01:01:30.653704 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-04 01:01:30.653711 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-04 01:01:30.653719 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:01:30.653728 | orchestrator |
2026-02-04 01:01:30.653736 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-04 01:01:30.653744 | orchestrator | Wednesday 04 February 2026 00:50:13 +0000 (0:00:01.522) 0:01:22.320 ****
2026-02-04 01:01:30.653752 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:01:30.653761 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:01:30.653769 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:01:30.653778 | orchestrator |
2026-02-04 01:01:30.653786 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-04 01:01:30.653794 | orchestrator | Wednesday 04 February 2026 00:50:14 +0000 (0:00:00.544) 0:01:22.865 ****
2026-02-04 01:01:30.653802 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-02-04 01:01:30.653815 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-02-04 01:01:30.653824 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-02-04 01:01:30.653831 | orchestrator |
2026-02-04 01:01:30.653846 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-02-04 01:01:30.653855 | orchestrator | Wednesday 04 February 2026 00:50:15 +0000 (0:00:01.478) 0:01:24.343 ****
2026-02-04 01:01:30.653863 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-04 01:01:30.653870 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-04 01:01:30.653878 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-04 01:01:30.653886 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-04 01:01:30.653893 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-04 01:01:30.653906 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-04 01:01:30.653913 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-04 01:01:30.653921 | orchestrator |
2026-02-04 01:01:30.653928 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-02-04 01:01:30.653935 | orchestrator | Wednesday 04 February 2026 00:50:16 +0000 (0:00:01.117) 0:01:25.460 ****
2026-02-04 01:01:30.653944 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-04 01:01:30.653951 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-04 01:01:30.653958 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-04 01:01:30.653966 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-04 01:01:30.653974 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-04 01:01:30.653981 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-04 01:01:30.653990 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-04 01:01:30.653997 | orchestrator |
2026-02-04 01:01:30.654004 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-04 01:01:30.654079 | orchestrator | Wednesday 04 February 2026 00:50:19 +0000 (0:00:02.507) 0:01:27.968 ****
2026-02-04 01:01:30.654094 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-04 01:01:30.654102 | orchestrator |
2026-02-04 01:01:30.654111 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-04 01:01:30.654119 | orchestrator | Wednesday 04 February 2026 00:50:20 +0000 (0:00:01.485) 0:01:29.454 ****
2026-02-04 01:01:30.654125 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-04 01:01:30.654130 | orchestrator |
2026-02-04 01:01:30.654135 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-04 01:01:30.654140 | orchestrator | Wednesday 04 February 2026 00:50:22 +0000 (0:00:01.906) 0:01:31.360 ****
2026-02-04 01:01:30.654144 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:01:30.654149 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:01:30.654154 | orchestrator | ok: [testbed-node-1]
2026-02-04 01:01:30.654159 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:01:30.654164 | orchestrator | ok: [testbed-node-2]
2026-02-04 01:01:30.654169 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:01:30.654174 | orchestrator |
2026-02-04 01:01:30.654179 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-04 01:01:30.654183 | orchestrator | Wednesday 04 February 2026 00:50:24 +0000 (0:00:02.132) 0:01:33.492 ****
2026-02-04 01:01:30.654188 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:01:30.654193 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:01:30.654198 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:01:30.654203 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:01:30.654208 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:01:30.654213 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:01:30.654218 | orchestrator |
2026-02-04 01:01:30.654222 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-04 01:01:30.654227 | orchestrator | Wednesday 04 February 2026 00:50:27 +0000 (0:00:02.808) 0:01:36.301 ****
2026-02-04 01:01:30.654232 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:01:30.654237 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:01:30.654242 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:01:30.654247 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:01:30.654252 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:01:30.654257 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:01:30.654261 | orchestrator |
2026-02-04 01:01:30.654266 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-04 01:01:30.654271 | orchestrator | Wednesday 04 February 2026 00:50:29 +0000 (0:00:01.682) 0:01:37.983 ****
2026-02-04 01:01:30.654276 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:01:30.654281 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:01:30.654286 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:01:30.654291 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:01:30.654296 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:01:30.654300 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:01:30.654305 | orchestrator |
2026-02-04 01:01:30.654310 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-04 01:01:30.654315 | orchestrator | Wednesday 04 February 2026 00:50:30 +0000 (0:00:01.326) 0:01:39.309 ****
2026-02-04 01:01:30.654320 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:01:30.654325 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:01:30.654330 | orchestrator | ok: [testbed-node-1]
2026-02-04 01:01:30.654335 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:01:30.654339 | orchestrator | ok: [testbed-node-2]
2026-02-04 01:01:30.654344 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:01:30.654349 | orchestrator |
2026-02-04 01:01:30.654354 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-04 01:01:30.654374 | orchestrator | Wednesday 04 February 2026 00:50:32 +0000 (0:00:01.655) 0:01:40.965 ****
2026-02-04 01:01:30.654380 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:01:30.654385 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:01:30.654390 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:01:30.654394 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:01:30.654399 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:01:30.654404 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:01:30.654409 | orchestrator |
2026-02-04 01:01:30.654414 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-04 01:01:30.654419 | orchestrator | Wednesday 04 February 2026 00:50:33 +0000 (0:00:01.230) 0:01:42.195 ****
2026-02-04 01:01:30.654424 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:01:30.654429 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:01:30.654433 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:01:30.654442 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:01:30.654447 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:01:30.654452 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:01:30.654457 | orchestrator |
2026-02-04 01:01:30.654461 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-04 01:01:30.654466 | orchestrator | Wednesday 04 February 2026 00:50:34 +0000 (0:00:00.764) 0:01:42.960 ****
2026-02-04 01:01:30.654471 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:01:30.654476 | orchestrator | ok: [testbed-node-1]
2026-02-04 01:01:30.654481 | orchestrator | ok: [testbed-node-2]
2026-02-04 01:01:30.654486 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:01:30.654491 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:01:30.654496 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:01:30.654500 | orchestrator |
2026-02-04 01:01:30.654505 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-04 01:01:30.654510 | orchestrator | Wednesday 04 February 2026 00:50:36 +0000 (0:00:02.034) 0:01:44.994 ****
2026-02-04 01:01:30.654515 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:01:30.654520 | orchestrator | ok: [testbed-node-1]
2026-02-04 01:01:30.654524 | orchestrator | ok: [testbed-node-2]
2026-02-04 01:01:30.654561 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:01:30.654567 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:01:30.654572 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:01:30.654577 | orchestrator |
2026-02-04 01:01:30.654582 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-04 01:01:30.654587 | orchestrator | Wednesday 04 February 2026 00:50:38 +0000 (0:00:01.782) 0:01:46.777 ****
2026-02-04 01:01:30.654591 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:01:30.654596 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:01:30.654601 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:01:30.654606 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:01:30.654611 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:01:30.654616 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:01:30.654621 | orchestrator |
2026-02-04 01:01:30.654626 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-04 01:01:30.654630 | orchestrator | Wednesday 04 February 2026 00:50:40 +0000 (0:00:02.125) 0:01:48.903 ****
2026-02-04 01:01:30.654635 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:01:30.654640 | orchestrator | ok: [testbed-node-1]
2026-02-04 01:01:30.654645 | orchestrator | ok: [testbed-node-2]
2026-02-04 01:01:30.654650 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:01:30.654655 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:01:30.654662 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:01:30.654671 | orchestrator |
2026-02-04 01:01:30.654679 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-04 01:01:30.654687 | orchestrator | Wednesday 04 February 2026 00:50:41 +0000 (0:00:01.073) 0:01:49.976 ****
2026-02-04 01:01:30.654694 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:01:30.654702 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:01:30.654709 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:01:30.654723 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:01:30.654732 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:01:30.654741 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:01:30.654749 | orchestrator |
2026-02-04 01:01:30.654757 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-04 01:01:30.654766 | orchestrator | Wednesday 04 February 2026 00:50:42 +0000 (0:00:01.732) 0:01:51.709 ****
2026-02-04 01:01:30.654774 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:01:30.654780 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:01:30.654785 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:01:30.654790 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:01:30.654795 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:01:30.654800 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:01:30.654807 | orchestrator |
2026-02-04 01:01:30.654816 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-04 01:01:30.654828 | orchestrator | Wednesday 04 February 2026 00:50:44 +0000 (0:00:01.486) 0:01:53.196 ****
2026-02-04 01:01:30.654836 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:01:30.654844 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:01:30.654852 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:01:30.654859 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:01:30.654867 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:01:30.654873 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:01:30.654880 | orchestrator |
2026-02-04 01:01:30.654886 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-04 01:01:30.654893 | orchestrator | Wednesday 04 February 2026 00:50:46 +0000 (0:00:01.846) 0:01:55.042 ****
2026-02-04 01:01:30.654900 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:01:30.654907 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:01:30.654914 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:01:30.654921 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:01:30.654928 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:01:30.654934 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:01:30.654941 | orchestrator |
2026-02-04 01:01:30.654948 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-04 01:01:30.654955 | orchestrator | Wednesday 04 February 2026 00:50:47 +0000 (0:00:01.490) 0:01:56.533 ****
2026-02-04 01:01:30.654962 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:01:30.654968 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:01:30.654975 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:01:30.654981 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:01:30.654988 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:01:30.654994 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:01:30.655014 | orchestrator |
2026-02-04 01:01:30.655029 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-04 01:01:30.655037 | orchestrator | Wednesday 04 February 2026 00:50:49 +0000 (0:00:02.216) 0:01:58.749 ****
2026-02-04 01:01:30.655044 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:01:30.655051 | orchestrator | ok: [testbed-node-1]
2026-02-04 01:01:30.655058 | orchestrator | ok: [testbed-node-2]
2026-02-04 01:01:30.655065 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:01:30.655073 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:01:30.655080 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:01:30.655087 | orchestrator |
2026-02-04 01:01:30.655094 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-04 01:01:30.655101 | orchestrator | Wednesday 04 February 2026 00:50:51 +0000 (0:00:01.582) 0:02:00.332 ****
2026-02-04 01:01:30.655107 | orchestrator | ok: [testbed-node-1]
2026-02-04 01:01:30.655114 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:01:30.655127 | orchestrator | ok: [testbed-node-2]
2026-02-04 01:01:30.655134 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:01:30.655141 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:01:30.655148 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:01:30.655155 | orchestrator |
2026-02-04 01:01:30.655163 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-04 01:01:30.655176 | orchestrator | Wednesday 04 February 2026 00:50:53 +0000 (0:00:02.410) 0:02:02.742 ****
2026-02-04 01:01:30.655183 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:01:30.655190 | orchestrator | ok: [testbed-node-1]
2026-02-04 01:01:30.655197 | orchestrator | ok: [testbed-node-2]
2026-02-04 01:01:30.655205 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:01:30.655211 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:01:30.655219 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:01:30.655225 | orchestrator |
2026-02-04 01:01:30.655232 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-02-04 01:01:30.655239 | orchestrator | Wednesday 04 February 2026 00:50:56 +0000 (0:00:02.495) 0:02:05.238 ****
2026-02-04 01:01:30.655246 | orchestrator | changed: [testbed-node-2]
2026-02-04 01:01:30.655253 | orchestrator | changed: [testbed-node-1]
2026-02-04 01:01:30.655259 | orchestrator | changed: [testbed-node-3]
2026-02-04 01:01:30.655266 | orchestrator | changed: [testbed-node-0]
2026-02-04 01:01:30.655274 | orchestrator | changed: [testbed-node-4]
2026-02-04 01:01:30.655281 | orchestrator | changed: [testbed-node-5]
2026-02-04 01:01:30.655289 | orchestrator |
2026-02-04 01:01:30.655296 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-02-04 01:01:30.655303 | orchestrator | Wednesday 04 February 2026 00:51:00 +0000 (0:00:03.886) 0:02:09.125 ****
2026-02-04 01:01:30.655310 | orchestrator | changed: [testbed-node-1]
2026-02-04 01:01:30.655316 | orchestrator | changed: [testbed-node-3]
2026-02-04 01:01:30.655323 | orchestrator | changed: [testbed-node-5]
2026-02-04 01:01:30.655330 | orchestrator | changed: [testbed-node-0]
2026-02-04 01:01:30.655337 | orchestrator | changed: [testbed-node-2]
2026-02-04 01:01:30.655344 | orchestrator | changed: [testbed-node-4]
2026-02-04 01:01:30.655351 | orchestrator |
2026-02-04 01:01:30.655358 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-02-04 01:01:30.655365 | orchestrator | Wednesday 04 February 2026 00:51:04 +0000 (0:00:04.238) 0:02:13.363 ****
2026-02-04 01:01:30.655373 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-04 01:01:30.655380 | orchestrator |
2026-02-04 01:01:30.655387 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-02-04 01:01:30.655395 | orchestrator | Wednesday 04 February 2026 00:51:06 +0000 (0:00:02.152) 0:02:15.516 ****
2026-02-04 01:01:30.655401 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:01:30.655409 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:01:30.655417 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:01:30.655423 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:01:30.655429 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:01:30.655436 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:01:30.655443 | orchestrator |
2026-02-04 01:01:30.655450 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-02-04 01:01:30.655457 | orchestrator | Wednesday 04 February 2026 00:51:07 +0000 (0:00:00.730) 0:02:16.246 ****
2026-02-04 01:01:30.655463 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:01:30.655470 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:01:30.655477 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:01:30.655484 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:01:30.655491 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:01:30.655498 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:01:30.655507 | orchestrator |
2026-02-04 01:01:30.655515 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-02-04 01:01:30.655569 | orchestrator | Wednesday 04 February 2026 00:51:08 +0000 (0:00:00.981) 0:02:17.228 ****
2026-02-04 01:01:30.655602 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-04 01:01:30.655610 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-04 01:01:30.655627 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-04 01:01:30.655634 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-04 01:01:30.655642 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-04 01:01:30.655649 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-04 01:01:30.655656 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-04 01:01:30.655663 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-04 01:01:30.655670 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-04 01:01:30.655676 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-04 01:01:30.655683 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-04 01:01:30.655701 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-04 01:01:30.655708 | orchestrator |
2026-02-04 01:01:30.655716 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-02-04 01:01:30.655723 | orchestrator | Wednesday 04 February 2026 00:51:09 +0000 (0:00:01.522) 0:02:18.751 ****
2026-02-04 01:01:30.655730 | orchestrator | changed: [testbed-node-0]
2026-02-04 01:01:30.655737 | orchestrator | changed: [testbed-node-2]
2026-02-04 01:01:30.655743 | orchestrator | changed: [testbed-node-1]
2026-02-04 01:01:30.655750 | orchestrator | changed: [testbed-node-3]
2026-02-04 01:01:30.655756 | orchestrator | changed: [testbed-node-4]
2026-02-04 01:01:30.655762 | orchestrator | changed: [testbed-node-5]
2026-02-04 01:01:30.655769 | orchestrator |
2026-02-04 01:01:30.655775 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-02-04 01:01:30.655788 | orchestrator | Wednesday 04 February 2026 00:51:11 +0000 (0:00:01.494) 0:02:20.245 ****
2026-02-04 01:01:30.655794 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:01:30.655800 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:01:30.655806 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:01:30.655812 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:01:30.655819 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:01:30.655825 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:01:30.655831 | orchestrator |
2026-02-04 01:01:30.655837 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-02-04 01:01:30.655843 | orchestrator | Wednesday 04 February 2026 00:51:12 +0000 (0:00:00.695) 0:02:20.940 ****
2026-02-04 01:01:30.655848 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:01:30.655854 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:01:30.655860 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:01:30.655866 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:01:30.655872 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:01:30.655879 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:01:30.655884 | orchestrator |
2026-02-04 01:01:30.655891 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-02-04 01:01:30.655897 | orchestrator | Wednesday 04 February 2026 00:51:13 +0000 (0:00:01.181) 0:02:22.122 ****
2026-02-04 01:01:30.655903 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:01:30.655909 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:01:30.655915 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:01:30.655921 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:01:30.655928 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:01:30.655934 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:01:30.655941 | orchestrator |
2026-02-04 01:01:30.655947 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-02-04 01:01:30.655954 | orchestrator | Wednesday 04 February 2026 00:51:14 +0000 (0:00:00.696) 0:02:22.819 ****
2026-02-04 01:01:30.655961 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-04 01:01:30.655975 | orchestrator |
2026-02-04 01:01:30.655982 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-02-04 01:01:30.655989 | orchestrator | Wednesday 04 February 2026 00:51:15 +0000 (0:00:01.478) 0:02:24.297 ****
2026-02-04 01:01:30.655995 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:01:30.656002 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:01:30.656009 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:01:30.656016 | orchestrator | ok:
[testbed-node-4] 2026-02-04 01:01:30.656023 | orchestrator | ok: [testbed-node-1] 2026-02-04 01:01:30.656029 | orchestrator | ok: [testbed-node-2] 2026-02-04 01:01:30.656035 | orchestrator | 2026-02-04 01:01:30.656042 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-02-04 01:01:30.656048 | orchestrator | Wednesday 04 February 2026 00:51:54 +0000 (0:00:38.589) 0:03:02.887 **** 2026-02-04 01:01:30.656055 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-04 01:01:30.656061 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-04 01:01:30.656067 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-04 01:01:30.656074 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:01:30.656081 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-04 01:01:30.656088 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-04 01:01:30.656095 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-04 01:01:30.656102 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-04 01:01:30.656108 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-04 01:01:30.656116 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-04 01:01:30.656120 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:01:30.656124 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-04 01:01:30.656128 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-04 01:01:30.656132 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-04 
01:01:30.656136 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:01:30.656140 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-04 01:01:30.656144 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-04 01:01:30.656148 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-04 01:01:30.656152 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:01:30.656156 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:01:30.656160 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-04 01:01:30.656170 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-04 01:01:30.656175 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-04 01:01:30.656179 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:01:30.656183 | orchestrator | 2026-02-04 01:01:30.656187 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-02-04 01:01:30.656190 | orchestrator | Wednesday 04 February 2026 00:51:55 +0000 (0:00:01.019) 0:03:03.907 **** 2026-02-04 01:01:30.656194 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:01:30.656198 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:01:30.656202 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:01:30.656206 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:01:30.656210 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:01:30.656214 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:01:30.656218 | orchestrator | 2026-02-04 01:01:30.656226 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-02-04 01:01:30.656234 | orchestrator | Wednesday 04 February 2026 00:51:56 +0000 (0:00:01.246) 0:03:05.155 **** 2026-02-04 01:01:30.656238 | 
orchestrator | skipping: [testbed-node-0] 2026-02-04 01:01:30.656242 | orchestrator | 2026-02-04 01:01:30.656246 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-02-04 01:01:30.656250 | orchestrator | Wednesday 04 February 2026 00:51:56 +0000 (0:00:00.185) 0:03:05.341 **** 2026-02-04 01:01:30.656254 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:01:30.656258 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:01:30.656262 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:01:30.656266 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:01:30.656270 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:01:30.656273 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:01:30.656277 | orchestrator | 2026-02-04 01:01:30.656281 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-02-04 01:01:30.656285 | orchestrator | Wednesday 04 February 2026 00:51:57 +0000 (0:00:01.114) 0:03:06.456 **** 2026-02-04 01:01:30.656289 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:01:30.656293 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:01:30.656297 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:01:30.656301 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:01:30.656305 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:01:30.656309 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:01:30.656313 | orchestrator | 2026-02-04 01:01:30.656317 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-02-04 01:01:30.656321 | orchestrator | Wednesday 04 February 2026 00:51:58 +0000 (0:00:01.155) 0:03:07.611 **** 2026-02-04 01:01:30.656325 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:01:30.656329 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:01:30.656333 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:01:30.656337 | 
orchestrator | skipping: [testbed-node-3] 2026-02-04 01:01:30.656340 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:01:30.656344 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:01:30.656348 | orchestrator | 2026-02-04 01:01:30.656352 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-02-04 01:01:30.656356 | orchestrator | Wednesday 04 February 2026 00:51:59 +0000 (0:00:01.071) 0:03:08.682 **** 2026-02-04 01:01:30.656360 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:01:30.656364 | orchestrator | ok: [testbed-node-1] 2026-02-04 01:01:30.656368 | orchestrator | ok: [testbed-node-2] 2026-02-04 01:01:30.656372 | orchestrator | ok: [testbed-node-3] 2026-02-04 01:01:30.656376 | orchestrator | ok: [testbed-node-4] 2026-02-04 01:01:30.656380 | orchestrator | ok: [testbed-node-5] 2026-02-04 01:01:30.656384 | orchestrator | 2026-02-04 01:01:30.656388 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-02-04 01:01:30.656392 | orchestrator | Wednesday 04 February 2026 00:52:03 +0000 (0:00:03.186) 0:03:11.869 **** 2026-02-04 01:01:30.656396 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:01:30.656400 | orchestrator | ok: [testbed-node-1] 2026-02-04 01:01:30.656404 | orchestrator | ok: [testbed-node-2] 2026-02-04 01:01:30.656408 | orchestrator | ok: [testbed-node-3] 2026-02-04 01:01:30.656412 | orchestrator | ok: [testbed-node-4] 2026-02-04 01:01:30.656416 | orchestrator | ok: [testbed-node-5] 2026-02-04 01:01:30.656420 | orchestrator | 2026-02-04 01:01:30.656424 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-02-04 01:01:30.656428 | orchestrator | Wednesday 04 February 2026 00:52:03 +0000 (0:00:00.862) 0:03:12.731 **** 2026-02-04 01:01:30.656432 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0, testbed-node-1, testbed-node-2, 
testbed-node-3, testbed-node-4, testbed-node-5 2026-02-04 01:01:30.656437 | orchestrator | 2026-02-04 01:01:30.656441 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-02-04 01:01:30.656445 | orchestrator | Wednesday 04 February 2026 00:52:05 +0000 (0:00:01.669) 0:03:14.401 **** 2026-02-04 01:01:30.656455 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:01:30.656459 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:01:30.656463 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:01:30.656467 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:01:30.656471 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:01:30.656475 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:01:30.656479 | orchestrator | 2026-02-04 01:01:30.656482 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-02-04 01:01:30.656486 | orchestrator | Wednesday 04 February 2026 00:52:07 +0000 (0:00:01.440) 0:03:15.841 **** 2026-02-04 01:01:30.656490 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:01:30.656494 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:01:30.656498 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:01:30.656502 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:01:30.656506 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:01:30.656510 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:01:30.656514 | orchestrator | 2026-02-04 01:01:30.656518 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-02-04 01:01:30.656522 | orchestrator | Wednesday 04 February 2026 00:52:08 +0000 (0:00:01.078) 0:03:16.920 **** 2026-02-04 01:01:30.656526 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:01:30.656544 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:01:30.656552 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:01:30.656557 | 
orchestrator | skipping: [testbed-node-3] 2026-02-04 01:01:30.656562 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:01:30.656569 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:01:30.656573 | orchestrator | 2026-02-04 01:01:30.656577 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-02-04 01:01:30.656581 | orchestrator | Wednesday 04 February 2026 00:52:09 +0000 (0:00:00.853) 0:03:17.774 **** 2026-02-04 01:01:30.656585 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:01:30.656589 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:01:30.656593 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:01:30.656597 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:01:30.656601 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:01:30.656605 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:01:30.656609 | orchestrator | 2026-02-04 01:01:30.656613 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-02-04 01:01:30.656617 | orchestrator | Wednesday 04 February 2026 00:52:10 +0000 (0:00:01.279) 0:03:19.053 **** 2026-02-04 01:01:30.656623 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:01:30.656627 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:01:30.656631 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:01:30.656635 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:01:30.656639 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:01:30.656643 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:01:30.656647 | orchestrator | 2026-02-04 01:01:30.656651 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-02-04 01:01:30.656655 | orchestrator | Wednesday 04 February 2026 00:52:11 +0000 (0:00:00.960) 0:03:20.014 **** 2026-02-04 01:01:30.656659 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:01:30.656663 | 
orchestrator | skipping: [testbed-node-1] 2026-02-04 01:01:30.656667 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:01:30.656671 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:01:30.656675 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:01:30.656679 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:01:30.656683 | orchestrator | 2026-02-04 01:01:30.656687 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-02-04 01:01:30.656691 | orchestrator | Wednesday 04 February 2026 00:52:12 +0000 (0:00:01.263) 0:03:21.278 **** 2026-02-04 01:01:30.656695 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:01:30.656699 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:01:30.656706 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:01:30.656711 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:01:30.656714 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:01:30.656718 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:01:30.656722 | orchestrator | 2026-02-04 01:01:30.656726 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-02-04 01:01:30.656730 | orchestrator | Wednesday 04 February 2026 00:52:13 +0000 (0:00:01.184) 0:03:22.463 **** 2026-02-04 01:01:30.656734 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:01:30.656738 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:01:30.656742 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:01:30.656746 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:01:30.656750 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:01:30.656754 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:01:30.656758 | orchestrator | 2026-02-04 01:01:30.656762 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-02-04 01:01:30.656766 | orchestrator | Wednesday 04 February 2026 00:52:14 
+0000 (0:00:01.292) 0:03:23.755 **** 2026-02-04 01:01:30.656770 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:01:30.656774 | orchestrator | ok: [testbed-node-1] 2026-02-04 01:01:30.656778 | orchestrator | ok: [testbed-node-2] 2026-02-04 01:01:30.656782 | orchestrator | ok: [testbed-node-3] 2026-02-04 01:01:30.656786 | orchestrator | ok: [testbed-node-4] 2026-02-04 01:01:30.656790 | orchestrator | ok: [testbed-node-5] 2026-02-04 01:01:30.656794 | orchestrator | 2026-02-04 01:01:30.656798 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-02-04 01:01:30.656802 | orchestrator | Wednesday 04 February 2026 00:52:17 +0000 (0:00:02.109) 0:03:25.865 **** 2026-02-04 01:01:30.656807 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-04 01:01:30.656811 | orchestrator | 2026-02-04 01:01:30.656815 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-02-04 01:01:30.656819 | orchestrator | Wednesday 04 February 2026 00:52:19 +0000 (0:00:02.004) 0:03:27.869 **** 2026-02-04 01:01:30.656823 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph) 2026-02-04 01:01:30.656827 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph) 2026-02-04 01:01:30.656831 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph) 2026-02-04 01:01:30.656835 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/) 2026-02-04 01:01:30.656839 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph) 2026-02-04 01:01:30.656843 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph) 2026-02-04 01:01:30.656847 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/) 2026-02-04 01:01:30.656851 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/) 2026-02-04 01:01:30.656854 | orchestrator | 
changed: [testbed-node-5] => (item=/etc/ceph) 2026-02-04 01:01:30.656858 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon) 2026-02-04 01:01:30.656862 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/) 2026-02-04 01:01:30.656866 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon) 2026-02-04 01:01:30.656870 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon) 2026-02-04 01:01:30.656874 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/) 2026-02-04 01:01:30.656878 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/) 2026-02-04 01:01:30.656882 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd) 2026-02-04 01:01:30.656886 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon) 2026-02-04 01:01:30.656890 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd) 2026-02-04 01:01:30.656894 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds) 2026-02-04 01:01:30.656898 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd) 2026-02-04 01:01:30.656907 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon) 2026-02-04 01:01:30.656911 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon) 2026-02-04 01:01:30.656915 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd) 2026-02-04 01:01:30.656919 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds) 2026-02-04 01:01:30.656923 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds) 2026-02-04 01:01:30.656929 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2026-02-04 01:01:30.656936 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd) 2026-02-04 01:01:30.656942 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd) 2026-02-04 01:01:30.656948 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds) 
2026-02-04 01:01:30.656957 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2026-02-04 01:01:30.656963 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash) 2026-02-04 01:01:30.656969 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2026-02-04 01:01:30.656976 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds) 2026-02-04 01:01:30.656981 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds) 2026-02-04 01:01:30.656988 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2026-02-04 01:01:30.656994 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2026-02-04 01:01:30.657002 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash) 2026-02-04 01:01:30.657008 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash) 2026-02-04 01:01:30.657015 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2026-02-04 01:01:30.657022 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2026-02-04 01:01:30.657029 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash) 2026-02-04 01:01:30.657035 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-04 01:01:30.657042 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2026-02-04 01:01:30.657050 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash) 2026-02-04 01:01:30.657054 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash) 2026-02-04 01:01:30.657058 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2026-02-04 01:01:30.657062 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2026-02-04 01:01:30.657066 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-04 01:01:30.657070 | orchestrator | changed: [testbed-node-1] => 
(item=/var/lib/ceph/bootstrap-rgw) 2026-02-04 01:01:30.657074 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2026-02-04 01:01:30.657078 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-04 01:01:30.657082 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2026-02-04 01:01:30.657086 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-04 01:01:30.657090 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-04 01:01:30.657093 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-04 01:01:30.657097 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-04 01:01:30.657101 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-04 01:01:30.657105 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-04 01:01:30.657109 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-04 01:01:30.657115 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-04 01:01:30.657122 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-04 01:01:30.657129 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-04 01:01:30.657140 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-04 01:01:30.657145 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-04 01:01:30.657151 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-04 01:01:30.657157 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-04 01:01:30.657163 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-04 01:01:30.657171 | 
orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-04 01:01:30.657177 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-04 01:01:30.657184 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-04 01:01:30.657191 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-04 01:01:30.657197 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-04 01:01:30.657203 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-04 01:01:30.657209 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-04 01:01:30.657216 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-04 01:01:30.657223 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-04 01:01:30.657230 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-04 01:01:30.657236 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph) 2026-02-04 01:01:30.657248 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-04 01:01:30.657255 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-04 01:01:30.657261 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-04 01:01:30.657267 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-04 01:01:30.657274 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-04 01:01:30.657280 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph) 2026-02-04 01:01:30.657287 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph) 2026-02-04 01:01:30.657293 | orchestrator | changed: [testbed-node-2] => 
(item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-04 01:01:30.657304 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph) 2026-02-04 01:01:30.657310 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-04 01:01:30.657317 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph) 2026-02-04 01:01:30.657324 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph) 2026-02-04 01:01:30.657331 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph) 2026-02-04 01:01:30.657338 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph) 2026-02-04 01:01:30.657344 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph) 2026-02-04 01:01:30.657351 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph) 2026-02-04 01:01:30.657358 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph) 2026-02-04 01:01:30.657364 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph) 2026-02-04 01:01:30.657371 | orchestrator | 2026-02-04 01:01:30.657378 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-02-04 01:01:30.657383 | orchestrator | Wednesday 04 February 2026 00:52:27 +0000 (0:00:07.964) 0:03:35.834 **** 2026-02-04 01:01:30.657387 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:01:30.657391 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:01:30.657395 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:01:30.657399 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-5, testbed-node-4 2026-02-04 01:01:30.657403 | orchestrator | 2026-02-04 01:01:30.657410 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2026-02-04 01:01:30.657414 | orchestrator | Wednesday 04 February 2026 00:52:29 +0000 (0:00:02.187) 0:03:38.022 **** 2026-02-04 01:01:30.657418 | 
orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-02-04 01:01:30.657423 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-02-04 01:01:30.657427 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-04 01:01:30.657431 | orchestrator |
2026-02-04 01:01:30.657434 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-02-04 01:01:30.657438 | orchestrator | Wednesday 04 February 2026  00:52:30 +0000 (0:00:01.068)       0:03:39.091 ****
2026-02-04 01:01:30.657442 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-02-04 01:01:30.657446 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-02-04 01:01:30.657450 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-04 01:01:30.657454 | orchestrator |
2026-02-04 01:01:30.657458 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-02-04 01:01:30.657462 | orchestrator | Wednesday 04 February 2026  00:52:32 +0000 (0:00:02.002)       0:03:41.093 ****
2026-02-04 01:01:30.657466 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:01:30.657470 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:01:30.657474 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:01:30.657478 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:01:30.657482 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:01:30.657486 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:01:30.657490 | orchestrator |
2026-02-04 01:01:30.657494 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-02-04 01:01:30.657498 | orchestrator | Wednesday 04 February 2026  00:52:33 +0000 (0:00:01.075)       0:03:42.169 ****
2026-02-04 01:01:30.657502 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:01:30.657506 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:01:30.657510 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:01:30.657514 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:01:30.657518 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:01:30.657522 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:01:30.657526 | orchestrator |
2026-02-04 01:01:30.657544 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-02-04 01:01:30.657549 | orchestrator | Wednesday 04 February 2026  00:52:34 +0000 (0:00:01.193)       0:03:43.362 ****
2026-02-04 01:01:30.657556 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:01:30.657563 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:01:30.657569 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:01:30.657576 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:01:30.657582 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:01:30.657588 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:01:30.657594 | orchestrator |
2026-02-04 01:01:30.657601 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-02-04 01:01:30.657608 | orchestrator | Wednesday 04 February 2026  00:52:35 +0000 (0:00:01.264)       0:03:44.627 ****
2026-02-04 01:01:30.657620 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:01:30.657627 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:01:30.657633 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:01:30.657639 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:01:30.657643 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:01:30.657647 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:01:30.657651 | orchestrator |
2026-02-04 01:01:30.657659 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-02-04 01:01:30.657663 | orchestrator | Wednesday 04 February 2026  00:52:37 +0000 (0:00:01.465)       0:03:46.092 ****
2026-02-04 01:01:30.657667 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:01:30.657671 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:01:30.657675 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:01:30.657679 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:01:30.657683 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:01:30.657690 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:01:30.657694 | orchestrator |
2026-02-04 01:01:30.657698 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-02-04 01:01:30.657703 | orchestrator | Wednesday 04 February 2026  00:52:38 +0000 (0:00:00.856)       0:03:46.949 ****
2026-02-04 01:01:30.657707 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:01:30.657711 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:01:30.657715 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:01:30.657719 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:01:30.657723 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:01:30.657727 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:01:30.657731 | orchestrator |
2026-02-04 01:01:30.657735 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-02-04 01:01:30.657739 | orchestrator | Wednesday 04 February 2026  00:52:39 +0000 (0:00:01.253)       0:03:48.202 ****
2026-02-04 01:01:30.657743 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:01:30.657747 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:01:30.657752 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:01:30.657756 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:01:30.657760 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:01:30.657764 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:01:30.657768 | orchestrator |
2026-02-04 01:01:30.657772 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-02-04 01:01:30.657776 | orchestrator | Wednesday 04 February 2026  00:52:40 +0000 (0:00:00.905)       0:03:49.108 ****
2026-02-04 01:01:30.657780 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:01:30.657784 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:01:30.657788 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:01:30.657792 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:01:30.657796 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:01:30.657800 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:01:30.657805 | orchestrator |
2026-02-04 01:01:30.657809 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-02-04 01:01:30.657813 | orchestrator | Wednesday 04 February 2026  00:52:41 +0000 (0:00:00.991)       0:03:50.099 ****
2026-02-04 01:01:30.657817 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:01:30.657821 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:01:30.657825 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:01:30.657829 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:01:30.657833 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:01:30.657839 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:01:30.657846 | orchestrator |
2026-02-04 01:01:30.657853 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-02-04 01:01:30.657859 | orchestrator | Wednesday 04 February 2026  00:52:44 +0000 (0:00:03.508)       0:03:53.607 ****
2026-02-04 01:01:30.657865 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:01:30.657869 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:01:30.657873 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:01:30.657877 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:01:30.657881 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:01:30.657885 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:01:30.657889 | orchestrator |
2026-02-04 01:01:30.657893 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-02-04 01:01:30.657897 | orchestrator | Wednesday 04 February 2026  00:52:45 +0000 (0:00:01.005)       0:03:54.613 ****
2026-02-04 01:01:30.657904 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:01:30.657908 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:01:30.657912 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:01:30.657916 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:01:30.657920 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:01:30.657925 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:01:30.657929 | orchestrator |
2026-02-04 01:01:30.657933 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-02-04 01:01:30.657937 | orchestrator | Wednesday 04 February 2026  00:52:47 +0000 (0:00:01.793)       0:03:56.406 ****
2026-02-04 01:01:30.657941 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:01:30.657945 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:01:30.657949 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:01:30.657953 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:01:30.657957 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:01:30.657961 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:01:30.657965 | orchestrator |
2026-02-04 01:01:30.657969 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-02-04 01:01:30.657973 | orchestrator | Wednesday 04 February 2026  00:52:48 +0000 (0:00:00.888)       0:03:57.295 ****
2026-02-04 01:01:30.657977 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:01:30.657981 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:01:30.657985 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:01:30.657989 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-02-04 01:01:30.657994 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-02-04 01:01:30.657998 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-04 01:01:30.658002 | orchestrator |
2026-02-04 01:01:30.658008 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-02-04 01:01:30.658035 | orchestrator | Wednesday 04 February 2026  00:52:49 +0000 (0:00:01.317)       0:03:58.612 ****
2026-02-04 01:01:30.658041 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:01:30.658045 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:01:30.658050 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])
2026-02-04 01:01:30.658059 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])
2026-02-04 01:01:30.658063 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:01:30.658068 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])
2026-02-04 01:01:30.658072 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])
2026-02-04 01:01:30.658076 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:01:30.658080 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:01:30.658084 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
2026-02-04 01:01:30.658093 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
2026-02-04 01:01:30.658097 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:01:30.658101 | orchestrator |
2026-02-04 01:01:30.658105 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-02-04 01:01:30.658109 | orchestrator | Wednesday 04 February 2026  00:52:51 +0000 (0:00:01.317)       0:03:59.930 ****
2026-02-04 01:01:30.658113 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:01:30.658117 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:01:30.658121 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:01:30.658125 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:01:30.658129 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:01:30.658133 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:01:30.658137 | orchestrator |
2026-02-04 01:01:30.658141 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-02-04 01:01:30.658145 | orchestrator | Wednesday 04 February 2026  00:52:52 +0000 (0:00:01.400)       0:04:01.331 ****
2026-02-04 01:01:30.658149 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:01:30.658153 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:01:30.658158 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:01:30.658162 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:01:30.658166 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:01:30.658170 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:01:30.658174 | orchestrator |
2026-02-04 01:01:30.658178 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-04 01:01:30.658182 | orchestrator | Wednesday 04 February 2026  00:52:53 +0000 (0:00:00.767)       0:04:02.098 ****
2026-02-04 01:01:30.658186 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:01:30.658190 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:01:30.658194 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:01:30.658198 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:01:30.658202 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:01:30.658206 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:01:30.658210 | orchestrator |
2026-02-04 01:01:30.658214 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-04 01:01:30.658218 | orchestrator | Wednesday 04 February 2026  00:52:54 +0000 (0:00:01.208)       0:04:03.307 ****
2026-02-04 01:01:30.658222 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:01:30.658226 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:01:30.658230 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:01:30.658234 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:01:30.658238 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:01:30.658242 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:01:30.658246 | orchestrator |
2026-02-04 01:01:30.658250 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-04 01:01:30.658255 | orchestrator | Wednesday 04 February 2026  00:52:55 +0000 (0:00:01.011)       0:04:04.318 ****
2026-02-04 01:01:30.658259 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:01:30.658266 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:01:30.658270 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:01:30.658274 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:01:30.658278 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:01:30.658282 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:01:30.658286 | orchestrator |
2026-02-04 01:01:30.658290 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-04 01:01:30.658297 | orchestrator | Wednesday 04 February 2026  00:52:56 +0000 (0:00:01.209)       0:04:05.528 ****
2026-02-04 01:01:30.658301 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:01:30.658305 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:01:30.658309 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:01:30.658313 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:01:30.658317 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:01:30.658321 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:01:30.658325 | orchestrator |
2026-02-04 01:01:30.658331 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-04 01:01:30.658335 | orchestrator | Wednesday 04 February 2026  00:52:57 +0000 (0:00:01.226)       0:04:06.754 ****
2026-02-04 01:01:30.658339 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-02-04 01:01:30.658343 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-02-04 01:01:30.658347 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-02-04 01:01:30.658351 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:01:30.658355 | orchestrator |
2026-02-04 01:01:30.658359 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-04 01:01:30.658363 | orchestrator | Wednesday 04 February 2026  00:52:59 +0000 (0:00:01.068)       0:04:07.823 ****
2026-02-04 01:01:30.658367 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-02-04 01:01:30.658371 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-02-04 01:01:30.658375 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-02-04 01:01:30.658379 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:01:30.658383 | orchestrator |
2026-02-04 01:01:30.658387 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-04 01:01:30.658391 | orchestrator | Wednesday 04 February 2026  00:53:00 +0000 (0:00:01.138)       0:04:08.961 ****
2026-02-04 01:01:30.658395 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-02-04 01:01:30.658399 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-02-04 01:01:30.658403 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-02-04 01:01:30.658407 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:01:30.658411 | orchestrator |
2026-02-04 01:01:30.658415 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-04 01:01:30.658419 | orchestrator | Wednesday 04 February 2026  00:53:00 +0000 (0:00:00.525)       0:04:09.486 ****
2026-02-04 01:01:30.658423 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:01:30.658427 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:01:30.658431 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:01:30.658435 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:01:30.658439 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:01:30.658443 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:01:30.658447 | orchestrator |
2026-02-04 01:01:30.658451 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-04 01:01:30.658456 | orchestrator | Wednesday 04 February 2026  00:53:01 +0000 (0:00:00.825)       0:04:10.312 ****
2026-02-04 01:01:30.658460 | orchestrator | skipping: [testbed-node-0] => (item=0)
2026-02-04 01:01:30.658464 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:01:30.658468 | orchestrator | skipping: [testbed-node-1] => (item=0)
2026-02-04 01:01:30.658472 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:01:30.658476 | orchestrator | skipping: [testbed-node-2] => (item=0)
2026-02-04 01:01:30.658480 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:01:30.658484 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-02-04 01:01:30.658488 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-02-04 01:01:30.658492 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-02-04 01:01:30.658496 | orchestrator |
2026-02-04 01:01:30.658500 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-02-04 01:01:30.658504 | orchestrator | Wednesday 04 February 2026  00:53:04 +0000 (0:00:02.669)       0:04:12.982 ****
2026-02-04 01:01:30.658512 | orchestrator | changed: [testbed-node-0]
2026-02-04 01:01:30.658516 | orchestrator | changed: [testbed-node-2]
2026-02-04 01:01:30.658520 | orchestrator | changed: [testbed-node-3]
2026-02-04 01:01:30.658524 | orchestrator | changed: [testbed-node-1]
2026-02-04 01:01:30.658558 | orchestrator | changed: [testbed-node-5]
2026-02-04 01:01:30.658564 | orchestrator | changed: [testbed-node-4]
2026-02-04 01:01:30.658568 | orchestrator |
2026-02-04 01:01:30.658572 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-02-04 01:01:30.658576 | orchestrator | Wednesday 04 February 2026  00:53:08 +0000 (0:00:04.057)       0:04:17.040 ****
2026-02-04 01:01:30.658580 | orchestrator | changed: [testbed-node-0]
2026-02-04 01:01:30.658584 | orchestrator | changed: [testbed-node-1]
2026-02-04 01:01:30.658588 | orchestrator | changed: [testbed-node-2]
2026-02-04 01:01:30.658592 | orchestrator | changed: [testbed-node-3]
2026-02-04 01:01:30.658599 | orchestrator | changed: [testbed-node-4]
2026-02-04 01:01:30.658605 | orchestrator | changed: [testbed-node-5]
2026-02-04 01:01:30.658612 | orchestrator |
2026-02-04 01:01:30.658618 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-02-04 01:01:30.658625 | orchestrator | Wednesday 04 February 2026  00:53:09 +0000 (0:00:01.195)       0:04:18.236 ****
2026-02-04 01:01:30.658631 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:01:30.658636 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:01:30.658642 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:01:30.658648 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 01:01:30.658655 | orchestrator |
2026-02-04 01:01:30.658662 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2026-02-04 01:01:30.658669 | orchestrator | Wednesday 04 February 2026  00:53:10 +0000 (0:00:01.436)       0:04:19.673 ****
2026-02-04 01:01:30.658675 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:01:30.658682 | orchestrator | ok: [testbed-node-1]
2026-02-04 01:01:30.658689 | orchestrator | ok: [testbed-node-2]
2026-02-04 01:01:30.658698 | orchestrator |
2026-02-04 01:01:30.658702 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2026-02-04 01:01:30.658707 | orchestrator | Wednesday 04 February 2026  00:53:11 +0000 (0:00:00.382)       0:04:20.055 ****
2026-02-04 01:01:30.658710 | orchestrator | changed: [testbed-node-0]
2026-02-04 01:01:30.658715 | orchestrator | changed: [testbed-node-1]
2026-02-04 01:01:30.658718 | orchestrator | changed: [testbed-node-2]
2026-02-04 01:01:30.658723 | orchestrator |
2026-02-04 01:01:30.658727 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2026-02-04 01:01:30.658731 | orchestrator | Wednesday 04 February 2026  00:53:12 +0000 (0:00:01.346)       0:04:21.402 ****
2026-02-04 01:01:30.658735 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-04 01:01:30.658739 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-04 01:01:30.658745 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-04 01:01:30.658750 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:01:30.658754 | orchestrator |
2026-02-04 01:01:30.658758 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2026-02-04 01:01:30.658762 | orchestrator | Wednesday 04 February 2026  00:53:13 +0000 (0:00:01.314)       0:04:22.716 ****
2026-02-04 01:01:30.658766 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:01:30.658770 | orchestrator | ok: [testbed-node-1]
2026-02-04 01:01:30.658774 | orchestrator | ok: [testbed-node-2]
2026-02-04 01:01:30.658778 | orchestrator |
2026-02-04 01:01:30.658782 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2026-02-04 01:01:30.658786 | orchestrator | Wednesday 04 February 2026  00:53:14 +0000 (0:00:00.769)       0:04:23.485 ****
2026-02-04 01:01:30.658790 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:01:30.658794 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:01:30.658798 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:01:30.658802 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-04 01:01:30.658810 | orchestrator |
2026-02-04 01:01:30.658814 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2026-02-04 01:01:30.658818 | orchestrator | Wednesday 04 February 2026  00:53:15 +0000 (0:00:01.050)       0:04:24.536 ****
2026-02-04 01:01:30.658822 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-04 01:01:30.658826 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-04 01:01:30.658830 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-04 01:01:30.658834 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:01:30.658838 | orchestrator |
2026-02-04 01:01:30.658842 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2026-02-04 01:01:30.658846 | orchestrator | Wednesday 04 February 2026  00:53:16 +0000 (0:00:00.783)       0:04:25.319 ****
2026-02-04 01:01:30.658850 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:01:30.658854 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:01:30.658858 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:01:30.658862 | orchestrator |
2026-02-04 01:01:30.658868 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2026-02-04 01:01:30.658875 | orchestrator | Wednesday 04 February 2026  00:53:17 +0000 (0:00:00.760)       0:04:26.080 ****
2026-02-04 01:01:30.658881 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:01:30.658888 | orchestrator |
2026-02-04 01:01:30.658894 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2026-02-04 01:01:30.658901 | orchestrator | Wednesday 04 February 2026  00:53:17 +0000 (0:00:00.277)       0:04:26.357 ****
2026-02-04 01:01:30.658908 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:01:30.658914 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:01:30.658920 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:01:30.658926 | orchestrator |
2026-02-04 01:01:30.658933 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2026-02-04 01:01:30.658940 | orchestrator | Wednesday 04 February 2026  00:53:17 +0000 (0:00:00.397)       0:04:26.754 ****
2026-02-04 01:01:30.658947 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:01:30.658954 | orchestrator |
2026-02-04 01:01:30.658960 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2026-02-04 01:01:30.658966 | orchestrator | Wednesday 04 February 2026  00:53:18 +0000 (0:00:00.254)       0:04:27.009 ****
2026-02-04 01:01:30.658970 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:01:30.658974 | orchestrator |
2026-02-04 01:01:30.658978 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2026-02-04 01:01:30.658982 | orchestrator | Wednesday 04 February 2026  00:53:18 +0000 (0:00:00.281)       0:04:27.290 ****
2026-02-04 01:01:30.658986 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:01:30.658990 | orchestrator |
2026-02-04 01:01:30.658994 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2026-02-04 01:01:30.658998 | orchestrator | Wednesday 04 February 2026  00:53:18 +0000 (0:00:00.129)       0:04:27.420 ****
2026-02-04 01:01:30.659002 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:01:30.659006 | orchestrator |
2026-02-04 01:01:30.659010 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2026-02-04 01:01:30.659014 | orchestrator | Wednesday 04 February 2026  00:53:18 +0000 (0:00:00.257)       0:04:27.678 ****
2026-02-04 01:01:30.659018 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:01:30.659022 | orchestrator |
2026-02-04 01:01:30.659026 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2026-02-04 01:01:30.659030 | orchestrator | Wednesday 04 February 2026  00:53:19 +0000 (0:00:00.269)       0:04:27.947 ****
2026-02-04 01:01:30.659034 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-04 01:01:30.659038 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-04 01:01:30.659043 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-04 01:01:30.659047 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:01:30.659055 | orchestrator |
2026-02-04 01:01:30.659059 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2026-02-04 01:01:30.659063 | orchestrator | Wednesday 04 February 2026  00:53:20 +0000 (0:00:00.835)       0:04:28.782 ****
2026-02-04 01:01:30.659067 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:01:30.659074 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:01:30.659079 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:01:30.659083 | orchestrator |
2026-02-04 01:01:30.659087 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2026-02-04 01:01:30.659091 | orchestrator | Wednesday 04 February 2026  00:53:20 +0000 (0:00:00.846)       0:04:29.629 ****
2026-02-04 01:01:30.659095 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:01:30.659099 | orchestrator |
2026-02-04 01:01:30.659103 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2026-02-04 01:01:30.659106 | orchestrator | Wednesday 04 February 2026  00:53:21 +0000 (0:00:00.257)       0:04:29.886 ****
2026-02-04 01:01:30.659110 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:01:30.659114 | orchestrator |
2026-02-04 01:01:30.659118 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2026-02-04 01:01:30.659124 | orchestrator | Wednesday 04 February 2026  00:53:21 +0000 (0:00:00.246)       0:04:30.133 ****
2026-02-04 01:01:30.659128 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:01:30.659134 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:01:30.659141 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:01:30.659147 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-04 01:01:30.659153 | orchestrator |
2026-02-04 01:01:30.659159 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2026-02-04 01:01:30.659165 | orchestrator | Wednesday 04 February 2026  00:53:22 +0000 (0:00:01.275)       0:04:31.409 ****
2026-02-04 01:01:30.659171 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:01:30.659178 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:01:30.659185 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:01:30.659191 | orchestrator |
2026-02-04 01:01:30.659196 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2026-02-04 01:01:30.659200 | orchestrator | Wednesday 04 February 2026  00:53:23 +0000 (0:00:00.409)       0:04:31.818 ****
2026-02-04 01:01:30.659204 | orchestrator | changed: [testbed-node-3]
2026-02-04 01:01:30.659208 | orchestrator | changed: [testbed-node-4]
2026-02-04 01:01:30.659211 | orchestrator | changed: [testbed-node-5]
2026-02-04 01:01:30.659215 | orchestrator |
2026-02-04 01:01:30.659219 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2026-02-04 01:01:30.659223 | orchestrator | Wednesday 04 February 2026  00:53:24 +0000 (0:00:01.494)       0:04:33.313 ****
2026-02-04 01:01:30.659227 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-04 01:01:30.659230 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-04 01:01:30.659234 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-04 01:01:30.659238 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:01:30.659242 | orchestrator |
2026-02-04 01:01:30.659246 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2026-02-04 01:01:30.659250 | orchestrator | Wednesday 04 February 2026  00:53:25 +0000 (0:00:00.945)       0:04:34.258 ****
2026-02-04 01:01:30.659254 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:01:30.659261 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:01:30.659267 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:01:30.659273 | orchestrator |
2026-02-04 01:01:30.659279 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2026-02-04 01:01:30.659286 | orchestrator | Wednesday 04 February 2026  00:53:25 +0000 (0:00:00.372)       0:04:34.630 ****
2026-02-04 01:01:30.659292 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:01:30.659299 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:01:30.659305 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:01:30.659311 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-04 01:01:30.659317 | orchestrator |
2026-02-04 01:01:30.659321 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
2026-02-04 01:01:30.659325 | orchestrator | Wednesday 04 February 2026  00:53:27 +0000 (0:00:01.376)       0:04:36.007 ****
2026-02-04 01:01:30.659329 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:01:30.659333 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:01:30.659337 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:01:30.659341 | orchestrator |
2026-02-04 01:01:30.659345 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
2026-02-04 01:01:30.659349 | orchestrator | Wednesday 04 February 2026  00:53:27 +0000 (0:00:00.389)       0:04:36.396 ****
2026-02-04 01:01:30.659352 | orchestrator | changed: [testbed-node-3]
2026-02-04 01:01:30.659356 | orchestrator | changed: [testbed-node-4]
2026-02-04 01:01:30.659360 | orchestrator | changed: [testbed-node-5]
2026-02-04 01:01:30.659364 | orchestrator |
2026-02-04 01:01:30.659368 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
2026-02-04 01:01:30.659374 | orchestrator | Wednesday 04 February 2026  00:53:29 +0000 (0:00:01.718)       0:04:38.114 ****
2026-02-04 01:01:30.659380 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-04 01:01:30.659387 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-04 01:01:30.659393 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-04 01:01:30.659400 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:01:30.659406 | orchestrator |
2026-02-04 01:01:30.659413 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2026-02-04 01:01:30.659419 | orchestrator | Wednesday 04 February 2026  00:53:30 +0000 (0:00:00.761)       0:04:38.876 ****
2026-02-04 01:01:30.659425 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:01:30.659432 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:01:30.659439 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:01:30.659445 | orchestrator |
2026-02-04 01:01:30.659452 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] ****************************
2026-02-04 01:01:30.659458 | orchestrator | Wednesday 04 February 2026  00:53:30 +0000 (0:00:00.448)       0:04:39.325 ****
2026-02-04 01:01:30.659465 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:01:30.659471 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:01:30.659478 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:01:30.659483 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:01:30.659486 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:01:30.659490 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:01:30.659494 | orchestrator |
2026-02-04 01:01:30.659501 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2026-02-04 01:01:30.659505 | orchestrator | Wednesday 04 February 2026  00:53:31 +0000 (0:00:00.736)       0:04:40.061 ****
2026-02-04 01:01:30.659508 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:01:30.659512 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:01:30.659516 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:01:30.659520 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 01:01:30.659524 | orchestrator |
2026-02-04 01:01:30.659539 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2026-02-04 01:01:30.659543 | orchestrator | Wednesday 04 February 2026  00:53:32 +0000 (0:00:01.328)       0:04:41.390 ****
2026-02-04 01:01:30.659547 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:01:30.659551 | orchestrator | ok: [testbed-node-1]
2026-02-04 01:01:30.659555 | orchestrator | ok: [testbed-node-2]
2026-02-04 01:01:30.659559 | orchestrator |
2026-02-04 01:01:30.659565 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2026-02-04 01:01:30.659569 | orchestrator | Wednesday 04 February 2026  00:53:32 +0000 (0:00:00.359)       0:04:41.750 ****
2026-02-04 01:01:30.659573 | orchestrator | changed: [testbed-node-0]
2026-02-04 01:01:30.659576 | orchestrator | changed: [testbed-node-1]
2026-02-04
01:01:30.659580 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:01:30.659587 | orchestrator | 2026-02-04 01:01:30.659591 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2026-02-04 01:01:30.659595 | orchestrator | Wednesday 04 February 2026 00:53:34 +0000 (0:00:01.748) 0:04:43.498 **** 2026-02-04 01:01:30.659599 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-04 01:01:30.659602 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-04 01:01:30.659606 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-04 01:01:30.659610 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:01:30.659614 | orchestrator | 2026-02-04 01:01:30.659618 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2026-02-04 01:01:30.659621 | orchestrator | Wednesday 04 February 2026 00:53:35 +0000 (0:00:00.838) 0:04:44.337 **** 2026-02-04 01:01:30.659625 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:01:30.659629 | orchestrator | ok: [testbed-node-1] 2026-02-04 01:01:30.659633 | orchestrator | ok: [testbed-node-2] 2026-02-04 01:01:30.659637 | orchestrator | 2026-02-04 01:01:30.659641 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2026-02-04 01:01:30.659644 | orchestrator | 2026-02-04 01:01:30.659648 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-04 01:01:30.659652 | orchestrator | Wednesday 04 February 2026 00:53:36 +0000 (0:00:00.669) 0:04:45.007 **** 2026-02-04 01:01:30.659656 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 01:01:30.659660 | orchestrator | 2026-02-04 01:01:30.659664 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-04 
01:01:30.659667 | orchestrator | Wednesday 04 February 2026 00:53:37 +0000 (0:00:00.967) 0:04:45.975 **** 2026-02-04 01:01:30.659671 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 01:01:30.659675 | orchestrator | 2026-02-04 01:01:30.659679 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-04 01:01:30.659683 | orchestrator | Wednesday 04 February 2026 00:53:37 +0000 (0:00:00.608) 0:04:46.584 **** 2026-02-04 01:01:30.659687 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:01:30.659690 | orchestrator | ok: [testbed-node-1] 2026-02-04 01:01:30.659694 | orchestrator | ok: [testbed-node-2] 2026-02-04 01:01:30.659698 | orchestrator | 2026-02-04 01:01:30.659702 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-04 01:01:30.659706 | orchestrator | Wednesday 04 February 2026 00:53:38 +0000 (0:00:00.755) 0:04:47.339 **** 2026-02-04 01:01:30.659710 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:01:30.659713 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:01:30.659717 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:01:30.659721 | orchestrator | 2026-02-04 01:01:30.659725 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-04 01:01:30.659729 | orchestrator | Wednesday 04 February 2026 00:53:39 +0000 (0:00:00.757) 0:04:48.097 **** 2026-02-04 01:01:30.659732 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:01:30.659736 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:01:30.659740 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:01:30.659744 | orchestrator | 2026-02-04 01:01:30.659748 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-04 01:01:30.659751 | orchestrator | Wednesday 04 February 2026 
00:53:39 +0000 (0:00:00.412) 0:04:48.509 **** 2026-02-04 01:01:30.659755 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:01:30.659760 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:01:30.659766 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:01:30.659772 | orchestrator | 2026-02-04 01:01:30.659779 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-04 01:01:30.659785 | orchestrator | Wednesday 04 February 2026 00:53:40 +0000 (0:00:00.356) 0:04:48.865 **** 2026-02-04 01:01:30.659792 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:01:30.659802 | orchestrator | ok: [testbed-node-1] 2026-02-04 01:01:30.659808 | orchestrator | ok: [testbed-node-2] 2026-02-04 01:01:30.659813 | orchestrator | 2026-02-04 01:01:30.659819 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-04 01:01:30.659825 | orchestrator | Wednesday 04 February 2026 00:53:40 +0000 (0:00:00.770) 0:04:49.636 **** 2026-02-04 01:01:30.659831 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:01:30.659837 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:01:30.659843 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:01:30.659850 | orchestrator | 2026-02-04 01:01:30.659856 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-04 01:01:30.659863 | orchestrator | Wednesday 04 February 2026 00:53:41 +0000 (0:00:00.706) 0:04:50.342 **** 2026-02-04 01:01:30.659870 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:01:30.659877 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:01:30.659883 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:01:30.659890 | orchestrator | 2026-02-04 01:01:30.659901 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-04 01:01:30.659907 | orchestrator | Wednesday 04 February 2026 00:53:41 +0000 
(0:00:00.361) 0:04:50.703 **** 2026-02-04 01:01:30.659911 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:01:30.659915 | orchestrator | ok: [testbed-node-1] 2026-02-04 01:01:30.659919 | orchestrator | ok: [testbed-node-2] 2026-02-04 01:01:30.659923 | orchestrator | 2026-02-04 01:01:30.659927 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-04 01:01:30.659930 | orchestrator | Wednesday 04 February 2026 00:53:42 +0000 (0:00:00.834) 0:04:51.538 **** 2026-02-04 01:01:30.659934 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:01:30.659938 | orchestrator | ok: [testbed-node-1] 2026-02-04 01:01:30.659942 | orchestrator | ok: [testbed-node-2] 2026-02-04 01:01:30.659946 | orchestrator | 2026-02-04 01:01:30.659949 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-04 01:01:30.659956 | orchestrator | Wednesday 04 February 2026 00:53:43 +0000 (0:00:00.719) 0:04:52.258 **** 2026-02-04 01:01:30.659960 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:01:30.659963 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:01:30.659967 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:01:30.659971 | orchestrator | 2026-02-04 01:01:30.659975 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-04 01:01:30.659978 | orchestrator | Wednesday 04 February 2026 00:53:44 +0000 (0:00:00.870) 0:04:53.128 **** 2026-02-04 01:01:30.659982 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:01:30.659986 | orchestrator | ok: [testbed-node-1] 2026-02-04 01:01:30.659990 | orchestrator | ok: [testbed-node-2] 2026-02-04 01:01:30.659994 | orchestrator | 2026-02-04 01:01:30.659997 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-04 01:01:30.660001 | orchestrator | Wednesday 04 February 2026 00:53:44 +0000 (0:00:00.447) 0:04:53.576 **** 2026-02-04 01:01:30.660005 
| orchestrator | skipping: [testbed-node-0] 2026-02-04 01:01:30.660009 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:01:30.660013 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:01:30.660016 | orchestrator | 2026-02-04 01:01:30.660020 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-04 01:01:30.660024 | orchestrator | Wednesday 04 February 2026 00:53:45 +0000 (0:00:00.395) 0:04:53.972 **** 2026-02-04 01:01:30.660028 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:01:30.660031 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:01:30.660035 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:01:30.660039 | orchestrator | 2026-02-04 01:01:30.660043 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-04 01:01:30.660046 | orchestrator | Wednesday 04 February 2026 00:53:45 +0000 (0:00:00.470) 0:04:54.442 **** 2026-02-04 01:01:30.660050 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:01:30.660054 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:01:30.660061 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:01:30.660065 | orchestrator | 2026-02-04 01:01:30.660069 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-04 01:01:30.660072 | orchestrator | Wednesday 04 February 2026 00:53:46 +0000 (0:00:00.651) 0:04:55.094 **** 2026-02-04 01:01:30.660076 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:01:30.660080 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:01:30.660084 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:01:30.660087 | orchestrator | 2026-02-04 01:01:30.660091 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-04 01:01:30.660095 | orchestrator | Wednesday 04 February 2026 00:53:47 +0000 (0:00:00.871) 0:04:55.966 **** 2026-02-04 01:01:30.660099 | 
orchestrator | skipping: [testbed-node-0] 2026-02-04 01:01:30.660103 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:01:30.660107 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:01:30.660110 | orchestrator | 2026-02-04 01:01:30.660114 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-04 01:01:30.660118 | orchestrator | Wednesday 04 February 2026 00:53:47 +0000 (0:00:00.472) 0:04:56.438 **** 2026-02-04 01:01:30.660121 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:01:30.660125 | orchestrator | ok: [testbed-node-1] 2026-02-04 01:01:30.660129 | orchestrator | ok: [testbed-node-2] 2026-02-04 01:01:30.660133 | orchestrator | 2026-02-04 01:01:30.660137 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-04 01:01:30.660140 | orchestrator | Wednesday 04 February 2026 00:53:48 +0000 (0:00:00.462) 0:04:56.901 **** 2026-02-04 01:01:30.660144 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:01:30.660148 | orchestrator | ok: [testbed-node-1] 2026-02-04 01:01:30.660152 | orchestrator | ok: [testbed-node-2] 2026-02-04 01:01:30.660155 | orchestrator | 2026-02-04 01:01:30.660159 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-04 01:01:30.660163 | orchestrator | Wednesday 04 February 2026 00:53:48 +0000 (0:00:00.461) 0:04:57.362 **** 2026-02-04 01:01:30.660167 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:01:30.660170 | orchestrator | ok: [testbed-node-1] 2026-02-04 01:01:30.660174 | orchestrator | ok: [testbed-node-2] 2026-02-04 01:01:30.660178 | orchestrator | 2026-02-04 01:01:30.660182 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2026-02-04 01:01:30.660185 | orchestrator | Wednesday 04 February 2026 00:53:49 +0000 (0:00:01.035) 0:04:58.398 **** 2026-02-04 01:01:30.660189 | orchestrator | ok: [testbed-node-0] 2026-02-04 
01:01:30.660193 | orchestrator | ok: [testbed-node-1] 2026-02-04 01:01:30.660197 | orchestrator | ok: [testbed-node-2] 2026-02-04 01:01:30.660200 | orchestrator | 2026-02-04 01:01:30.660204 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2026-02-04 01:01:30.660208 | orchestrator | Wednesday 04 February 2026 00:53:50 +0000 (0:00:00.543) 0:04:58.941 **** 2026-02-04 01:01:30.660212 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 01:01:30.660216 | orchestrator | 2026-02-04 01:01:30.660220 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2026-02-04 01:01:30.660223 | orchestrator | Wednesday 04 February 2026 00:53:51 +0000 (0:00:01.026) 0:04:59.968 **** 2026-02-04 01:01:30.660227 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:01:30.660231 | orchestrator | 2026-02-04 01:01:30.660235 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2026-02-04 01:01:30.660241 | orchestrator | Wednesday 04 February 2026 00:53:51 +0000 (0:00:00.162) 0:05:00.130 **** 2026-02-04 01:01:30.660245 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-02-04 01:01:30.660249 | orchestrator | 2026-02-04 01:01:30.660252 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2026-02-04 01:01:30.660256 | orchestrator | Wednesday 04 February 2026 00:53:52 +0000 (0:00:01.311) 0:05:01.441 **** 2026-02-04 01:01:30.660260 | orchestrator | ok: [testbed-node-1] 2026-02-04 01:01:30.660264 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:01:30.660270 | orchestrator | ok: [testbed-node-2] 2026-02-04 01:01:30.660274 | orchestrator | 2026-02-04 01:01:30.660278 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2026-02-04 01:01:30.660282 | orchestrator | Wednesday 04 
February 2026 00:53:53 +0000 (0:00:00.606) 0:05:02.048 **** 2026-02-04 01:01:30.660286 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:01:30.660289 | orchestrator | ok: [testbed-node-1] 2026-02-04 01:01:30.660293 | orchestrator | ok: [testbed-node-2] 2026-02-04 01:01:30.660297 | orchestrator | 2026-02-04 01:01:30.660303 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2026-02-04 01:01:30.660307 | orchestrator | Wednesday 04 February 2026 00:53:53 +0000 (0:00:00.493) 0:05:02.542 **** 2026-02-04 01:01:30.660310 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:01:30.660314 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:01:30.660318 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:01:30.660322 | orchestrator | 2026-02-04 01:01:30.660326 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2026-02-04 01:01:30.660329 | orchestrator | Wednesday 04 February 2026 00:53:55 +0000 (0:00:01.352) 0:05:03.895 **** 2026-02-04 01:01:30.660333 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:01:30.660337 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:01:30.660341 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:01:30.660345 | orchestrator | 2026-02-04 01:01:30.660348 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2026-02-04 01:01:30.660352 | orchestrator | Wednesday 04 February 2026 00:53:56 +0000 (0:00:01.502) 0:05:05.397 **** 2026-02-04 01:01:30.660356 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:01:30.660360 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:01:30.660364 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:01:30.660367 | orchestrator | 2026-02-04 01:01:30.660371 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2026-02-04 01:01:30.660375 | orchestrator | Wednesday 04 February 2026 00:53:57 +0000 
(0:00:00.869) 0:05:06.267 **** 2026-02-04 01:01:30.660379 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:01:30.660383 | orchestrator | ok: [testbed-node-1] 2026-02-04 01:01:30.660386 | orchestrator | ok: [testbed-node-2] 2026-02-04 01:01:30.660390 | orchestrator | 2026-02-04 01:01:30.660394 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2026-02-04 01:01:30.660398 | orchestrator | Wednesday 04 February 2026 00:53:58 +0000 (0:00:00.813) 0:05:07.081 **** 2026-02-04 01:01:30.660402 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:01:30.660405 | orchestrator | 2026-02-04 01:01:30.660409 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2026-02-04 01:01:30.660413 | orchestrator | Wednesday 04 February 2026 00:53:59 +0000 (0:00:01.200) 0:05:08.282 **** 2026-02-04 01:01:30.660417 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:01:30.660421 | orchestrator | 2026-02-04 01:01:30.660425 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2026-02-04 01:01:30.660428 | orchestrator | Wednesday 04 February 2026 00:54:00 +0000 (0:00:01.162) 0:05:09.444 **** 2026-02-04 01:01:30.660432 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-02-04 01:01:30.660436 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-04 01:01:30.660440 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-04 01:01:30.660444 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-04 01:01:30.660447 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-04 01:01:30.660451 | orchestrator | ok: [testbed-node-1] => (item=None) 2026-02-04 01:01:30.660455 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-04 01:01:30.660459 | 
orchestrator | changed: [testbed-node-0 -> {{ item }}] 2026-02-04 01:01:30.660463 | orchestrator | ok: [testbed-node-2] => (item=None) 2026-02-04 01:01:30.660466 | orchestrator | ok: [testbed-node-2 -> {{ item }}] 2026-02-04 01:01:30.660475 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-04 01:01:30.660479 | orchestrator | ok: [testbed-node-1 -> {{ item }}] 2026-02-04 01:01:30.660483 | orchestrator | 2026-02-04 01:01:30.660486 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2026-02-04 01:01:30.660490 | orchestrator | Wednesday 04 February 2026 00:54:05 +0000 (0:00:04.794) 0:05:14.239 **** 2026-02-04 01:01:30.660494 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:01:30.660498 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:01:30.660502 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:01:30.660505 | orchestrator | 2026-02-04 01:01:30.660509 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2026-02-04 01:01:30.660513 | orchestrator | Wednesday 04 February 2026 00:54:07 +0000 (0:00:01.993) 0:05:16.235 **** 2026-02-04 01:01:30.660517 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:01:30.660521 | orchestrator | ok: [testbed-node-1] 2026-02-04 01:01:30.660524 | orchestrator | ok: [testbed-node-2] 2026-02-04 01:01:30.660539 | orchestrator | 2026-02-04 01:01:30.660546 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2026-02-04 01:01:30.660552 | orchestrator | Wednesday 04 February 2026 00:54:07 +0000 (0:00:00.467) 0:05:16.702 **** 2026-02-04 01:01:30.660557 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:01:30.660561 | orchestrator | ok: [testbed-node-2] 2026-02-04 01:01:30.660565 | orchestrator | ok: [testbed-node-1] 2026-02-04 01:01:30.660569 | orchestrator | 2026-02-04 01:01:30.660573 | orchestrator | TASK [ceph-mon : Generate initial monmap] 
************************************** 2026-02-04 01:01:30.660577 | orchestrator | Wednesday 04 February 2026 00:54:09 +0000 (0:00:01.296) 0:05:17.999 **** 2026-02-04 01:01:30.660581 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:01:30.660585 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:01:30.660589 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:01:30.660593 | orchestrator | 2026-02-04 01:01:30.660599 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2026-02-04 01:01:30.660603 | orchestrator | Wednesday 04 February 2026 00:54:12 +0000 (0:00:03.529) 0:05:21.528 **** 2026-02-04 01:01:30.660607 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:01:30.660611 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:01:30.660615 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:01:30.660618 | orchestrator | 2026-02-04 01:01:30.660622 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2026-02-04 01:01:30.660626 | orchestrator | Wednesday 04 February 2026 00:54:14 +0000 (0:00:01.271) 0:05:22.800 **** 2026-02-04 01:01:30.660630 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:01:30.660634 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:01:30.660637 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:01:30.660641 | orchestrator | 2026-02-04 01:01:30.660653 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2026-02-04 01:01:30.660657 | orchestrator | Wednesday 04 February 2026 00:54:14 +0000 (0:00:00.356) 0:05:23.157 **** 2026-02-04 01:01:30.660661 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 01:01:30.660665 | orchestrator | 2026-02-04 01:01:30.660669 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2026-02-04 01:01:30.660673 | 
orchestrator | Wednesday 04 February 2026 00:54:14 +0000 (0:00:00.584) 0:05:23.741 **** 2026-02-04 01:01:30.660677 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:01:30.660680 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:01:30.660684 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:01:30.660688 | orchestrator | 2026-02-04 01:01:30.660692 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2026-02-04 01:01:30.660696 | orchestrator | Wednesday 04 February 2026 00:54:15 +0000 (0:00:00.637) 0:05:24.379 **** 2026-02-04 01:01:30.660700 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:01:30.660704 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:01:30.660735 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:01:30.660739 | orchestrator | 2026-02-04 01:01:30.660743 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2026-02-04 01:01:30.660747 | orchestrator | Wednesday 04 February 2026 00:54:16 +0000 (0:00:00.403) 0:05:24.782 **** 2026-02-04 01:01:30.660751 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 01:01:30.660755 | orchestrator | 2026-02-04 01:01:30.660759 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2026-02-04 01:01:30.660762 | orchestrator | Wednesday 04 February 2026 00:54:16 +0000 (0:00:00.843) 0:05:25.626 **** 2026-02-04 01:01:30.660766 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:01:30.660770 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:01:30.660774 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:01:30.660778 | orchestrator | 2026-02-04 01:01:30.660782 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2026-02-04 01:01:30.660785 | orchestrator | Wednesday 04 February 2026 00:54:21 +0000 (0:00:04.474) 
0:05:30.100 **** 2026-02-04 01:01:30.660789 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:01:30.660793 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:01:30.660797 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:01:30.660801 | orchestrator | 2026-02-04 01:01:30.660805 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2026-02-04 01:01:30.660809 | orchestrator | Wednesday 04 February 2026 00:54:22 +0000 (0:00:01.425) 0:05:31.526 **** 2026-02-04 01:01:30.660813 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:01:30.660816 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:01:30.660820 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:01:30.660824 | orchestrator | 2026-02-04 01:01:30.660828 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2026-02-04 01:01:30.660832 | orchestrator | Wednesday 04 February 2026 00:54:25 +0000 (0:00:02.267) 0:05:33.793 **** 2026-02-04 01:01:30.660836 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:01:30.660840 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:01:30.660843 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:01:30.660847 | orchestrator | 2026-02-04 01:01:30.660851 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2026-02-04 01:01:30.660855 | orchestrator | Wednesday 04 February 2026 00:54:27 +0000 (0:00:02.405) 0:05:36.199 **** 2026-02-04 01:01:30.660859 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 01:01:30.660863 | orchestrator | 2026-02-04 01:01:30.660866 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] 
************* 2026-02-04 01:01:30.660870 | orchestrator | Wednesday 04 February 2026 00:54:29 +0000 (0:00:01.678) 0:05:37.877 **** 2026-02-04 01:01:30.660874 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:01:30.660878 | orchestrator | 2026-02-04 01:01:30.660882 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2026-02-04 01:01:30.660886 | orchestrator | Wednesday 04 February 2026 00:54:30 +0000 (0:00:01.199) 0:05:39.077 **** 2026-02-04 01:01:30.660890 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:01:30.660893 | orchestrator | ok: [testbed-node-2] 2026-02-04 01:01:30.660897 | orchestrator | ok: [testbed-node-1] 2026-02-04 01:01:30.660901 | orchestrator | 2026-02-04 01:01:30.660905 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2026-02-04 01:01:30.660909 | orchestrator | Wednesday 04 February 2026 00:54:40 +0000 (0:00:09.961) 0:05:49.038 **** 2026-02-04 01:01:30.660913 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:01:30.660917 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:01:30.660920 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:01:30.660924 | orchestrator | 2026-02-04 01:01:30.660928 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2026-02-04 01:01:30.660932 | orchestrator | Wednesday 04 February 2026 00:54:40 +0000 (0:00:00.356) 0:05:49.395 **** 2026-02-04 01:01:30.661072 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__b11015967ad5fd841968af693f909fc9e4f9b8eb'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2026-02-04 01:01:30.661081 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': 
{'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__b11015967ad5fd841968af693f909fc9e4f9b8eb'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2026-02-04 01:01:30.661088 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__b11015967ad5fd841968af693f909fc9e4f9b8eb'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-02-04 01:01:30.661093 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__b11015967ad5fd841968af693f909fc9e4f9b8eb'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-02-04 01:01:30.661097 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__b11015967ad5fd841968af693f909fc9e4f9b8eb'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-02-04 01:01:30.661102 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__b11015967ad5fd841968af693f909fc9e4f9b8eb'}}, {'key': 'osd_crush_chooseleaf_type', 'value': 
'__omit_place_holder__b11015967ad5fd841968af693f909fc9e4f9b8eb'}])  2026-02-04 01:01:30.661106 | orchestrator | 2026-02-04 01:01:30.661110 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-02-04 01:01:30.661114 | orchestrator | Wednesday 04 February 2026 00:54:56 +0000 (0:00:15.465) 0:06:04.860 **** 2026-02-04 01:01:30.661118 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:01:30.661122 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:01:30.661126 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:01:30.661129 | orchestrator | 2026-02-04 01:01:30.661133 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-02-04 01:01:30.661137 | orchestrator | Wednesday 04 February 2026 00:54:56 +0000 (0:00:00.417) 0:06:05.278 **** 2026-02-04 01:01:30.661141 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 01:01:30.661145 | orchestrator | 2026-02-04 01:01:30.661149 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2026-02-04 01:01:30.661152 | orchestrator | Wednesday 04 February 2026 00:54:57 +0000 (0:00:00.940) 0:06:06.218 **** 2026-02-04 01:01:30.661159 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:01:30.661165 | orchestrator | ok: [testbed-node-1] 2026-02-04 01:01:30.661171 | orchestrator | ok: [testbed-node-2] 2026-02-04 01:01:30.661177 | orchestrator | 2026-02-04 01:01:30.661183 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2026-02-04 01:01:30.661190 | orchestrator | Wednesday 04 February 2026 00:54:57 +0000 (0:00:00.430) 0:06:06.649 **** 2026-02-04 01:01:30.661195 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:01:30.661206 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:01:30.661212 | orchestrator | skipping: [testbed-node-2] 2026-02-04 
01:01:30.661218 | orchestrator | 2026-02-04 01:01:30.661224 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2026-02-04 01:01:30.661231 | orchestrator | Wednesday 04 February 2026 00:54:58 +0000 (0:00:00.431) 0:06:07.080 **** 2026-02-04 01:01:30.661237 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-04 01:01:30.661243 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-04 01:01:30.661250 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-04 01:01:30.661256 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:01:30.661263 | orchestrator | 2026-02-04 01:01:30.661267 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2026-02-04 01:01:30.661271 | orchestrator | Wednesday 04 February 2026 00:54:59 +0000 (0:00:01.123) 0:06:08.204 **** 2026-02-04 01:01:30.661275 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:01:30.661278 | orchestrator | ok: [testbed-node-1] 2026-02-04 01:01:30.661282 | orchestrator | ok: [testbed-node-2] 2026-02-04 01:01:30.661286 | orchestrator | 2026-02-04 01:01:30.661290 | orchestrator | PLAY [Apply role ceph-mgr] ***************************************************** 2026-02-04 01:01:30.661294 | orchestrator | 2026-02-04 01:01:30.661298 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-04 01:01:30.661320 | orchestrator | Wednesday 04 February 2026 00:55:00 +0000 (0:00:01.191) 0:06:09.395 **** 2026-02-04 01:01:30.661326 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 01:01:30.661332 | orchestrator | 2026-02-04 01:01:30.661338 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-04 01:01:30.661344 | orchestrator | Wednesday 04 February 2026 00:55:01 +0000 
(0:00:00.623) 0:06:10.019 **** 2026-02-04 01:01:30.661349 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 01:01:30.661356 | orchestrator | 2026-02-04 01:01:30.661362 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-04 01:01:30.661372 | orchestrator | Wednesday 04 February 2026 00:55:02 +0000 (0:00:01.018) 0:06:11.037 **** 2026-02-04 01:01:30.661378 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:01:30.661382 | orchestrator | ok: [testbed-node-1] 2026-02-04 01:01:30.661385 | orchestrator | ok: [testbed-node-2] 2026-02-04 01:01:30.661389 | orchestrator | 2026-02-04 01:01:30.661393 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-04 01:01:30.661397 | orchestrator | Wednesday 04 February 2026 00:55:03 +0000 (0:00:00.821) 0:06:11.859 **** 2026-02-04 01:01:30.661401 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:01:30.661404 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:01:30.661408 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:01:30.661413 | orchestrator | 2026-02-04 01:01:30.661419 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-04 01:01:30.661425 | orchestrator | Wednesday 04 February 2026 00:55:03 +0000 (0:00:00.371) 0:06:12.231 **** 2026-02-04 01:01:30.661432 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:01:30.661438 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:01:30.661444 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:01:30.661450 | orchestrator | 2026-02-04 01:01:30.661456 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-04 01:01:30.661462 | orchestrator | Wednesday 04 February 2026 00:55:03 +0000 (0:00:00.383) 0:06:12.614 **** 2026-02-04 01:01:30.661469 | 
orchestrator | skipping: [testbed-node-0] 2026-02-04 01:01:30.661475 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:01:30.661482 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:01:30.661488 | orchestrator | 2026-02-04 01:01:30.661495 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-04 01:01:30.661504 | orchestrator | Wednesday 04 February 2026 00:55:04 +0000 (0:00:00.748) 0:06:13.363 **** 2026-02-04 01:01:30.661507 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:01:30.661512 | orchestrator | ok: [testbed-node-1] 2026-02-04 01:01:30.661515 | orchestrator | ok: [testbed-node-2] 2026-02-04 01:01:30.661519 | orchestrator | 2026-02-04 01:01:30.661523 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-04 01:01:30.661527 | orchestrator | Wednesday 04 February 2026 00:55:05 +0000 (0:00:00.816) 0:06:14.180 **** 2026-02-04 01:01:30.661560 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:01:30.661564 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:01:30.661568 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:01:30.661571 | orchestrator | 2026-02-04 01:01:30.661575 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-04 01:01:30.661579 | orchestrator | Wednesday 04 February 2026 00:55:05 +0000 (0:00:00.364) 0:06:14.544 **** 2026-02-04 01:01:30.661583 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:01:30.661587 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:01:30.661590 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:01:30.661594 | orchestrator | 2026-02-04 01:01:30.661598 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-04 01:01:30.661602 | orchestrator | Wednesday 04 February 2026 00:55:06 +0000 (0:00:00.366) 0:06:14.911 **** 2026-02-04 01:01:30.661605 | orchestrator | ok: 
[testbed-node-0] 2026-02-04 01:01:30.661609 | orchestrator | ok: [testbed-node-1] 2026-02-04 01:01:30.661613 | orchestrator | ok: [testbed-node-2] 2026-02-04 01:01:30.661617 | orchestrator | 2026-02-04 01:01:30.661621 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-04 01:01:30.661624 | orchestrator | Wednesday 04 February 2026 00:55:07 +0000 (0:00:01.287) 0:06:16.198 **** 2026-02-04 01:01:30.661628 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:01:30.661632 | orchestrator | ok: [testbed-node-1] 2026-02-04 01:01:30.661636 | orchestrator | ok: [testbed-node-2] 2026-02-04 01:01:30.661639 | orchestrator | 2026-02-04 01:01:30.661643 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-04 01:01:30.661647 | orchestrator | Wednesday 04 February 2026 00:55:08 +0000 (0:00:00.831) 0:06:17.030 **** 2026-02-04 01:01:30.661651 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:01:30.661655 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:01:30.661659 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:01:30.661663 | orchestrator | 2026-02-04 01:01:30.661666 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-04 01:01:30.661670 | orchestrator | Wednesday 04 February 2026 00:55:08 +0000 (0:00:00.393) 0:06:17.424 **** 2026-02-04 01:01:30.661674 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:01:30.661678 | orchestrator | ok: [testbed-node-1] 2026-02-04 01:01:30.661681 | orchestrator | ok: [testbed-node-2] 2026-02-04 01:01:30.661685 | orchestrator | 2026-02-04 01:01:30.661689 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-04 01:01:30.661693 | orchestrator | Wednesday 04 February 2026 00:55:09 +0000 (0:00:00.395) 0:06:17.819 **** 2026-02-04 01:01:30.661697 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:01:30.661700 | 
orchestrator | skipping: [testbed-node-1] 2026-02-04 01:01:30.661704 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:01:30.661708 | orchestrator | 2026-02-04 01:01:30.661712 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-04 01:01:30.661716 | orchestrator | Wednesday 04 February 2026 00:55:09 +0000 (0:00:00.386) 0:06:18.206 **** 2026-02-04 01:01:30.661719 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:01:30.661723 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:01:30.661727 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:01:30.661731 | orchestrator | 2026-02-04 01:01:30.661735 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-04 01:01:30.661756 | orchestrator | Wednesday 04 February 2026 00:55:10 +0000 (0:00:00.862) 0:06:19.069 **** 2026-02-04 01:01:30.661765 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:01:30.661769 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:01:30.661773 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:01:30.661777 | orchestrator | 2026-02-04 01:01:30.661781 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-04 01:01:30.661785 | orchestrator | Wednesday 04 February 2026 00:55:10 +0000 (0:00:00.375) 0:06:19.444 **** 2026-02-04 01:01:30.661789 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:01:30.661793 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:01:30.661796 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:01:30.661800 | orchestrator | 2026-02-04 01:01:30.661804 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-04 01:01:30.661831 | orchestrator | Wednesday 04 February 2026 00:55:11 +0000 (0:00:00.408) 0:06:19.853 **** 2026-02-04 01:01:30.661836 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:01:30.661840 | 
orchestrator | skipping: [testbed-node-1] 2026-02-04 01:01:30.661843 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:01:30.661847 | orchestrator | 2026-02-04 01:01:30.661851 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-04 01:01:30.661855 | orchestrator | Wednesday 04 February 2026 00:55:11 +0000 (0:00:00.339) 0:06:20.193 **** 2026-02-04 01:01:30.661858 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:01:30.661862 | orchestrator | ok: [testbed-node-1] 2026-02-04 01:01:30.661866 | orchestrator | ok: [testbed-node-2] 2026-02-04 01:01:30.661870 | orchestrator | 2026-02-04 01:01:30.661874 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-04 01:01:30.661877 | orchestrator | Wednesday 04 February 2026 00:55:12 +0000 (0:00:00.834) 0:06:21.027 **** 2026-02-04 01:01:30.661881 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:01:30.661885 | orchestrator | ok: [testbed-node-1] 2026-02-04 01:01:30.661889 | orchestrator | ok: [testbed-node-2] 2026-02-04 01:01:30.661892 | orchestrator | 2026-02-04 01:01:30.661896 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-04 01:01:30.661900 | orchestrator | Wednesday 04 February 2026 00:55:12 +0000 (0:00:00.437) 0:06:21.465 **** 2026-02-04 01:01:30.661904 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:01:30.661908 | orchestrator | ok: [testbed-node-1] 2026-02-04 01:01:30.661911 | orchestrator | ok: [testbed-node-2] 2026-02-04 01:01:30.661915 | orchestrator | 2026-02-04 01:01:30.661919 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-02-04 01:01:30.661923 | orchestrator | Wednesday 04 February 2026 00:55:13 +0000 (0:00:00.664) 0:06:22.130 **** 2026-02-04 01:01:30.661927 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-04 01:01:30.661931 | orchestrator | ok: [testbed-node-0 -> 
testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-04 01:01:30.661935 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-04 01:01:30.661938 | orchestrator | 2026-02-04 01:01:30.661942 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-02-04 01:01:30.661946 | orchestrator | Wednesday 04 February 2026 00:55:14 +0000 (0:00:01.372) 0:06:23.503 **** 2026-02-04 01:01:30.661950 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 01:01:30.661954 | orchestrator | 2026-02-04 01:01:30.661958 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2026-02-04 01:01:30.661961 | orchestrator | Wednesday 04 February 2026 00:55:15 +0000 (0:00:00.956) 0:06:24.459 **** 2026-02-04 01:01:30.661965 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:01:30.661969 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:01:30.661973 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:01:30.661976 | orchestrator | 2026-02-04 01:01:30.661980 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2026-02-04 01:01:30.661984 | orchestrator | Wednesday 04 February 2026 00:55:16 +0000 (0:00:00.777) 0:06:25.236 **** 2026-02-04 01:01:30.661988 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:01:30.661995 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:01:30.661998 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:01:30.662002 | orchestrator | 2026-02-04 01:01:30.662006 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2026-02-04 01:01:30.662010 | orchestrator | Wednesday 04 February 2026 00:55:16 +0000 (0:00:00.418) 0:06:25.654 **** 2026-02-04 01:01:30.662032 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-02-04 
01:01:30.662036 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-02-04 01:01:30.662040 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-02-04 01:01:30.662043 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2026-02-04 01:01:30.662047 | orchestrator | 2026-02-04 01:01:30.662051 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2026-02-04 01:01:30.662055 | orchestrator | Wednesday 04 February 2026 00:55:27 +0000 (0:00:10.678) 0:06:36.333 **** 2026-02-04 01:01:30.662059 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:01:30.662062 | orchestrator | ok: [testbed-node-1] 2026-02-04 01:01:30.662066 | orchestrator | ok: [testbed-node-2] 2026-02-04 01:01:30.662070 | orchestrator | 2026-02-04 01:01:30.662074 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2026-02-04 01:01:30.662078 | orchestrator | Wednesday 04 February 2026 00:55:28 +0000 (0:00:00.744) 0:06:37.078 **** 2026-02-04 01:01:30.662081 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-02-04 01:01:30.662085 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-02-04 01:01:30.662089 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-02-04 01:01:30.662093 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-02-04 01:01:30.662096 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-04 01:01:30.662100 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-04 01:01:30.662103 | orchestrator | 2026-02-04 01:01:30.662107 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2026-02-04 01:01:30.662110 | orchestrator | Wednesday 04 February 2026 00:55:30 +0000 (0:00:02.399) 0:06:39.477 **** 2026-02-04 01:01:30.662125 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-02-04 01:01:30.662129 | 
orchestrator | skipping: [testbed-node-1] => (item=None)  2026-02-04 01:01:30.662133 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-02-04 01:01:30.662137 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-02-04 01:01:30.662140 | orchestrator | changed: [testbed-node-2] => (item=None) 2026-02-04 01:01:30.662144 | orchestrator | changed: [testbed-node-1] => (item=None) 2026-02-04 01:01:30.662147 | orchestrator | 2026-02-04 01:01:30.662151 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2026-02-04 01:01:30.662154 | orchestrator | Wednesday 04 February 2026 00:55:32 +0000 (0:00:01.335) 0:06:40.813 **** 2026-02-04 01:01:30.662158 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:01:30.662161 | orchestrator | ok: [testbed-node-1] 2026-02-04 01:01:30.662165 | orchestrator | ok: [testbed-node-2] 2026-02-04 01:01:30.662168 | orchestrator | 2026-02-04 01:01:30.662174 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2026-02-04 01:01:30.662177 | orchestrator | Wednesday 04 February 2026 00:55:32 +0000 (0:00:00.803) 0:06:41.616 **** 2026-02-04 01:01:30.662181 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:01:30.662184 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:01:30.662188 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:01:30.662191 | orchestrator | 2026-02-04 01:01:30.662195 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-02-04 01:01:30.662198 | orchestrator | Wednesday 04 February 2026 00:55:33 +0000 (0:00:00.718) 0:06:42.334 **** 2026-02-04 01:01:30.662202 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:01:30.662205 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:01:30.662209 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:01:30.662212 | orchestrator | 2026-02-04 01:01:30.662221 | orchestrator | TASK [ceph-mgr : Include 
start_mgr.yml] **************************************** 2026-02-04 01:01:30.662225 | orchestrator | Wednesday 04 February 2026 00:55:33 +0000 (0:00:00.350) 0:06:42.684 **** 2026-02-04 01:01:30.662228 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 01:01:30.662232 | orchestrator | 2026-02-04 01:01:30.662235 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2026-02-04 01:01:30.662239 | orchestrator | Wednesday 04 February 2026 00:55:34 +0000 (0:00:00.615) 0:06:43.299 **** 2026-02-04 01:01:30.662242 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:01:30.662246 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:01:30.662249 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:01:30.662253 | orchestrator | 2026-02-04 01:01:30.662256 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2026-02-04 01:01:30.662260 | orchestrator | Wednesday 04 February 2026 00:55:35 +0000 (0:00:00.693) 0:06:43.993 **** 2026-02-04 01:01:30.662264 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:01:30.662267 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:01:30.662271 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:01:30.662274 | orchestrator | 2026-02-04 01:01:30.662278 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************ 2026-02-04 01:01:30.662281 | orchestrator | Wednesday 04 February 2026 00:55:35 +0000 (0:00:00.463) 0:06:44.457 **** 2026-02-04 01:01:30.662285 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 01:01:30.662288 | orchestrator | 2026-02-04 01:01:30.662292 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2026-02-04 01:01:30.662295 | orchestrator | Wednesday 04 February 2026 
00:55:36 +0000 (0:00:00.596) 0:06:45.053 **** 2026-02-04 01:01:30.662299 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:01:30.662303 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:01:30.662306 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:01:30.662310 | orchestrator | 2026-02-04 01:01:30.662313 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2026-02-04 01:01:30.662317 | orchestrator | Wednesday 04 February 2026 00:55:37 +0000 (0:00:01.707) 0:06:46.760 **** 2026-02-04 01:01:30.662320 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:01:30.662324 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:01:30.662327 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:01:30.662331 | orchestrator | 2026-02-04 01:01:30.662335 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2026-02-04 01:01:30.662341 | orchestrator | Wednesday 04 February 2026 00:55:39 +0000 (0:00:01.299) 0:06:48.060 **** 2026-02-04 01:01:30.662346 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:01:30.662353 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:01:30.662359 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:01:30.662364 | orchestrator | 2026-02-04 01:01:30.662370 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 2026-02-04 01:01:30.662375 | orchestrator | Wednesday 04 February 2026 00:55:41 +0000 (0:00:02.026) 0:06:50.087 **** 2026-02-04 01:01:30.662381 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:01:30.662387 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:01:30.662393 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:01:30.662399 | orchestrator | 2026-02-04 01:01:30.662405 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2026-02-04 01:01:30.662411 | orchestrator | Wednesday 04 February 2026 00:55:43 +0000 
(0:00:02.299) 0:06:52.386 **** 2026-02-04 01:01:30.662417 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:01:30.662423 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:01:30.662429 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2026-02-04 01:01:30.662435 | orchestrator | 2026-02-04 01:01:30.662442 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************ 2026-02-04 01:01:30.662452 | orchestrator | Wednesday 04 February 2026 00:55:44 +0000 (0:00:00.972) 0:06:53.359 **** 2026-02-04 01:01:30.662459 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left). 2026-02-04 01:01:30.662465 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left). 2026-02-04 01:01:30.662485 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left). 2026-02-04 01:01:30.662489 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left). 2026-02-04 01:01:30.662493 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (26 retries left). 2026-02-04 01:01:30.662496 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (25 retries left). 
2026-02-04 01:01:30.662500 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-02-04 01:01:30.662503 | orchestrator | 2026-02-04 01:01:30.662507 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] **************************** 2026-02-04 01:01:30.662512 | orchestrator | Wednesday 04 February 2026 00:56:21 +0000 (0:00:36.590) 0:07:29.950 **** 2026-02-04 01:01:30.662516 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-02-04 01:01:30.662520 | orchestrator | 2026-02-04 01:01:30.662523 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2026-02-04 01:01:30.662526 | orchestrator | Wednesday 04 February 2026 00:56:22 +0000 (0:00:01.331) 0:07:31.281 **** 2026-02-04 01:01:30.662541 | orchestrator | ok: [testbed-node-2] 2026-02-04 01:01:30.662545 | orchestrator | 2026-02-04 01:01:30.662549 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] ************************** 2026-02-04 01:01:30.662552 | orchestrator | Wednesday 04 February 2026 00:56:22 +0000 (0:00:00.372) 0:07:31.654 **** 2026-02-04 01:01:30.662555 | orchestrator | ok: [testbed-node-2] 2026-02-04 01:01:30.662559 | orchestrator | 2026-02-04 01:01:30.662562 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] ***************************** 2026-02-04 01:01:30.662566 | orchestrator | Wednesday 04 February 2026 00:56:23 +0000 (0:00:00.224) 0:07:31.878 **** 2026-02-04 01:01:30.662569 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat) 2026-02-04 01:01:30.662573 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs) 2026-02-04 01:01:30.662576 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful) 2026-02-04 01:01:30.662580 | orchestrator | 2026-02-04 01:01:30.662583 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] 
************************************** 2026-02-04 01:01:30.662587 | orchestrator | Wednesday 04 February 2026 00:56:29 +0000 (0:00:06.484) 0:07:38.363 **** 2026-02-04 01:01:30.662590 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2026-02-04 01:01:30.662593 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard) 2026-02-04 01:01:30.662597 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus) 2026-02-04 01:01:30.662600 | orchestrator | skipping: [testbed-node-2] => (item=status)  2026-02-04 01:01:30.662604 | orchestrator | 2026-02-04 01:01:30.662607 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-02-04 01:01:30.662611 | orchestrator | Wednesday 04 February 2026 00:56:35 +0000 (0:00:05.493) 0:07:43.857 **** 2026-02-04 01:01:30.662614 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:01:30.662618 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:01:30.662621 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:01:30.662625 | orchestrator | 2026-02-04 01:01:30.662628 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-02-04 01:01:30.662631 | orchestrator | Wednesday 04 February 2026 00:56:35 +0000 (0:00:00.730) 0:07:44.587 **** 2026-02-04 01:01:30.662635 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 01:01:30.662641 | orchestrator | 2026-02-04 01:01:30.662647 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2026-02-04 01:01:30.662653 | orchestrator | Wednesday 04 February 2026 00:56:36 +0000 (0:00:00.598) 0:07:45.186 **** 2026-02-04 01:01:30.662663 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:01:30.662670 | orchestrator | ok: [testbed-node-1] 2026-02-04 01:01:30.662675 | orchestrator | ok: 
[testbed-node-2] 2026-02-04 01:01:30.662681 | orchestrator | 2026-02-04 01:01:30.662687 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2026-02-04 01:01:30.662692 | orchestrator | Wednesday 04 February 2026 00:56:36 +0000 (0:00:00.355) 0:07:45.541 **** 2026-02-04 01:01:30.662697 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:01:30.662703 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:01:30.662708 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:01:30.662713 | orchestrator | 2026-02-04 01:01:30.662718 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2026-02-04 01:01:30.662724 | orchestrator | Wednesday 04 February 2026 00:56:38 +0000 (0:00:01.711) 0:07:47.253 **** 2026-02-04 01:01:30.662730 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-04 01:01:30.662736 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-04 01:01:30.662742 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-04 01:01:30.662748 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:01:30.662754 | orchestrator | 2026-02-04 01:01:30.662760 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2026-02-04 01:01:30.662766 | orchestrator | Wednesday 04 February 2026 00:56:39 +0000 (0:00:00.727) 0:07:47.980 **** 2026-02-04 01:01:30.662771 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:01:30.662776 | orchestrator | ok: [testbed-node-1] 2026-02-04 01:01:30.662780 | orchestrator | ok: [testbed-node-2] 2026-02-04 01:01:30.662783 | orchestrator | 2026-02-04 01:01:30.662787 | orchestrator | PLAY [Apply role ceph-osd] ***************************************************** 2026-02-04 01:01:30.662790 | orchestrator | 2026-02-04 01:01:30.662794 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-04 
01:01:30.662797 | orchestrator | Wednesday 04 February 2026 00:56:39 +0000 (0:00:00.615) 0:07:48.596 **** 2026-02-04 01:01:30.662801 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-04 01:01:30.662805 | orchestrator | 2026-02-04 01:01:30.662827 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-04 01:01:30.662831 | orchestrator | Wednesday 04 February 2026 00:56:40 +0000 (0:00:00.990) 0:07:49.587 **** 2026-02-04 01:01:30.662836 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-04 01:01:30.662845 | orchestrator | 2026-02-04 01:01:30.662852 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-04 01:01:30.662858 | orchestrator | Wednesday 04 February 2026 00:56:41 +0000 (0:00:00.621) 0:07:50.209 **** 2026-02-04 01:01:30.662864 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:01:30.662869 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:01:30.662875 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:01:30.662882 | orchestrator | 2026-02-04 01:01:30.662892 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-04 01:01:30.662898 | orchestrator | Wednesday 04 February 2026 00:56:42 +0000 (0:00:00.648) 0:07:50.857 **** 2026-02-04 01:01:30.662904 | orchestrator | ok: [testbed-node-4] 2026-02-04 01:01:30.662908 | orchestrator | ok: [testbed-node-3] 2026-02-04 01:01:30.662911 | orchestrator | ok: [testbed-node-5] 2026-02-04 01:01:30.662915 | orchestrator | 2026-02-04 01:01:30.662918 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-04 01:01:30.662922 | orchestrator | Wednesday 04 February 2026 00:56:42 +0000 (0:00:00.740) 0:07:51.597 **** 
2026-02-04 01:01:30.662925 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:01:30.662934 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:01:30.662938 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:01:30.662941 | orchestrator |
2026-02-04 01:01:30.662945 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-04 01:01:30.662948 | orchestrator | Wednesday 04 February 2026 00:56:43 +0000 (0:00:00.765) 0:07:52.363 ****
2026-02-04 01:01:30.662952 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:01:30.662955 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:01:30.662959 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:01:30.662962 | orchestrator |
2026-02-04 01:01:30.662965 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-04 01:01:30.662969 | orchestrator | Wednesday 04 February 2026 00:56:44 +0000 (0:00:00.728) 0:07:53.092 ****
2026-02-04 01:01:30.662972 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:01:30.662976 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:01:30.662979 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:01:30.662983 | orchestrator |
2026-02-04 01:01:30.662986 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-04 01:01:30.662990 | orchestrator | Wednesday 04 February 2026 00:56:45 +0000 (0:00:00.769) 0:07:53.862 ****
2026-02-04 01:01:30.662993 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:01:30.662997 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:01:30.663000 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:01:30.663004 | orchestrator |
2026-02-04 01:01:30.663007 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-04 01:01:30.663010 | orchestrator | Wednesday 04 February 2026 00:56:45 +0000 (0:00:00.408) 0:07:54.271 ****
2026-02-04 01:01:30.663014 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:01:30.663017 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:01:30.663021 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:01:30.663024 | orchestrator |
2026-02-04 01:01:30.663028 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-04 01:01:30.663031 | orchestrator | Wednesday 04 February 2026 00:56:45 +0000 (0:00:00.428) 0:07:54.700 ****
2026-02-04 01:01:30.663035 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:01:30.663038 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:01:30.663042 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:01:30.663045 | orchestrator |
2026-02-04 01:01:30.663049 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-04 01:01:30.663052 | orchestrator | Wednesday 04 February 2026 00:56:46 +0000 (0:00:00.695) 0:07:55.395 ****
2026-02-04 01:01:30.663055 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:01:30.663059 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:01:30.663062 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:01:30.663066 | orchestrator |
2026-02-04 01:01:30.663069 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-04 01:01:30.663073 | orchestrator | Wednesday 04 February 2026 00:56:47 +0000 (0:00:01.034) 0:07:56.429 ****
2026-02-04 01:01:30.663077 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:01:30.663080 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:01:30.663083 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:01:30.663087 | orchestrator |
2026-02-04 01:01:30.663090 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-04 01:01:30.663094 | orchestrator | Wednesday 04 February 2026 00:56:48 +0000 (0:00:00.385) 0:07:56.814 ****
2026-02-04 01:01:30.663097 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:01:30.663101 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:01:30.663104 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:01:30.663108 | orchestrator |
2026-02-04 01:01:30.663111 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-04 01:01:30.663115 | orchestrator | Wednesday 04 February 2026 00:56:48 +0000 (0:00:00.321) 0:07:57.136 ****
2026-02-04 01:01:30.663118 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:01:30.663122 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:01:30.663125 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:01:30.663131 | orchestrator |
2026-02-04 01:01:30.663134 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-04 01:01:30.663138 | orchestrator | Wednesday 04 February 2026 00:56:48 +0000 (0:00:00.388) 0:07:57.525 ****
2026-02-04 01:01:30.663141 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:01:30.663145 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:01:30.663148 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:01:30.663152 | orchestrator |
2026-02-04 01:01:30.663155 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-04 01:01:30.663159 | orchestrator | Wednesday 04 February 2026 00:56:49 +0000 (0:00:00.706) 0:07:58.232 ****
2026-02-04 01:01:30.663162 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:01:30.663166 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:01:30.663169 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:01:30.663173 | orchestrator |
2026-02-04 01:01:30.663179 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-04 01:01:30.663183 | orchestrator | Wednesday 04 February 2026 00:56:49 +0000 (0:00:00.406) 0:07:58.638 ****
2026-02-04 01:01:30.663186 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:01:30.663190 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:01:30.663193 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:01:30.663197 | orchestrator |
2026-02-04 01:01:30.663200 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-04 01:01:30.663204 | orchestrator | Wednesday 04 February 2026 00:56:50 +0000 (0:00:00.336) 0:07:58.974 ****
2026-02-04 01:01:30.663207 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:01:30.663211 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:01:30.663214 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:01:30.663218 | orchestrator |
2026-02-04 01:01:30.663221 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-04 01:01:30.663227 | orchestrator | Wednesday 04 February 2026 00:56:50 +0000 (0:00:00.350) 0:07:59.325 ****
2026-02-04 01:01:30.663230 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:01:30.663234 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:01:30.663237 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:01:30.663241 | orchestrator |
2026-02-04 01:01:30.663244 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-04 01:01:30.663248 | orchestrator | Wednesday 04 February 2026 00:56:51 +0000 (0:00:00.645) 0:07:59.970 ****
2026-02-04 01:01:30.663251 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:01:30.663256 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:01:30.663262 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:01:30.663267 | orchestrator |
2026-02-04 01:01:30.663273 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-04 01:01:30.663278 | orchestrator | Wednesday 04 February 2026 00:56:51 +0000 (0:00:00.389) 0:08:00.360 ****
2026-02-04 01:01:30.663284 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:01:30.663290 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:01:30.663296 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:01:30.663301 | orchestrator |
2026-02-04 01:01:30.663306 | orchestrator | TASK [ceph-osd : Set_fact add_osd] *********************************************
2026-02-04 01:01:30.663312 | orchestrator | Wednesday 04 February 2026 00:56:52 +0000 (0:00:00.629) 0:08:00.989 ****
2026-02-04 01:01:30.663317 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:01:30.663322 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:01:30.663327 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:01:30.663332 | orchestrator |
2026-02-04 01:01:30.663338 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] **********************************
2026-02-04 01:01:30.663344 | orchestrator | Wednesday 04 February 2026 00:56:52 +0000 (0:00:00.670) 0:08:01.659 ****
2026-02-04 01:01:30.663349 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-04 01:01:30.663355 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-04 01:01:30.663360 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-04 01:01:30.663370 | orchestrator |
2026-02-04 01:01:30.663377 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ******************************
2026-02-04 01:01:30.663382 | orchestrator | Wednesday 04 February 2026 00:56:53 +0000 (0:00:00.879) 0:08:02.538 ****
2026-02-04 01:01:30.663388 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-04 01:01:30.663392 | orchestrator |
2026-02-04 01:01:30.663395 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] **********************************
2026-02-04 01:01:30.663399 | orchestrator | Wednesday 04 February 2026 00:56:54 +0000 (0:00:00.580) 0:08:03.119 ****
2026-02-04 01:01:30.663402 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:01:30.663406 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:01:30.663409 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:01:30.663413 | orchestrator |
2026-02-04 01:01:30.663416 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] *********************************
2026-02-04 01:01:30.663419 | orchestrator | Wednesday 04 February 2026 00:56:54 +0000 (0:00:00.344) 0:08:03.464 ****
2026-02-04 01:01:30.663423 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:01:30.663426 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:01:30.663430 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:01:30.663433 | orchestrator |
2026-02-04 01:01:30.663437 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] *******************************
2026-02-04 01:01:30.663440 | orchestrator | Wednesday 04 February 2026 00:56:55 +0000 (0:00:00.680) 0:08:04.144 ****
2026-02-04 01:01:30.663444 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:01:30.663447 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:01:30.663451 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:01:30.663454 | orchestrator |
2026-02-04 01:01:30.663458 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] **********************************
2026-02-04 01:01:30.663461 | orchestrator | Wednesday 04 February 2026 00:56:56 +0000 (0:00:00.758) 0:08:04.902 ****
2026-02-04 01:01:30.663465 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:01:30.663468 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:01:30.663472 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:01:30.663475 | orchestrator |
2026-02-04 01:01:30.663479 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ********************************
2026-02-04 01:01:30.663482 | orchestrator | Wednesday 04 February 2026 00:56:56 +0000 (0:00:00.381) 0:08:05.284 ****
2026-02-04 01:01:30.663486 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-02-04 01:01:30.663489 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-02-04 01:01:30.663493 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-02-04 01:01:30.663496 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-02-04 01:01:30.663500 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-02-04 01:01:30.663503 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-02-04 01:01:30.663511 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-02-04 01:01:30.663515 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-02-04 01:01:30.663518 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-02-04 01:01:30.663522 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10})
2026-02-04 01:01:30.663526 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10})
2026-02-04 01:01:30.663542 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10})
2026-02-04 01:01:30.663548 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-02-04 01:01:30.663551 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-02-04 01:01:30.663559 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-02-04 01:01:30.663565 | orchestrator |
2026-02-04 01:01:30.663571 | orchestrator | TASK [ceph-osd : Install dependencies] *****************************************
2026-02-04 01:01:30.663577 | orchestrator | Wednesday 04 February 2026 00:56:59 +0000 (0:00:03.370) 0:08:08.654 ****
2026-02-04 01:01:30.663582 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:01:30.663588 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:01:30.663593 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:01:30.663599 | orchestrator |
2026-02-04 01:01:30.663605 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] *************************************
2026-02-04 01:01:30.663612 | orchestrator | Wednesday 04 February 2026 00:57:00 +0000 (0:00:00.708) 0:08:09.363 ****
2026-02-04 01:01:30.663617 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-04 01:01:30.663624 | orchestrator |
2026-02-04 01:01:30.663629 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] *********************
2026-02-04 01:01:30.663634 | orchestrator | Wednesday 04 February 2026 00:57:01 +0000 (0:00:00.654) 0:08:10.017 ****
2026-02-04 01:01:30.663638 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/)
2026-02-04 01:01:30.663641 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/)
2026-02-04 01:01:30.663645 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/)
2026-02-04 01:01:30.663648 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/)
2026-02-04 01:01:30.663652 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/)
2026-02-04 01:01:30.663655 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/)
2026-02-04 01:01:30.663659 | orchestrator |
2026-02-04 01:01:30.663662 | orchestrator | TASK [ceph-osd : Get keys from monitors] ***************************************
2026-02-04 01:01:30.663666 | orchestrator | Wednesday 04 February 2026 00:57:02 +0000 (0:00:01.201) 0:08:11.218 ****
2026-02-04 01:01:30.663669 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-04 01:01:30.663673 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-02-04 01:01:30.663676 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-02-04 01:01:30.663679 | orchestrator |
2026-02-04 01:01:30.663683 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] ***********************************
2026-02-04 01:01:30.663686 | orchestrator | Wednesday 04 February 2026 00:57:04 +0000 (0:00:02.299) 0:08:13.518 ****
2026-02-04 01:01:30.663690 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-02-04 01:01:30.663693 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-02-04 01:01:30.663697 | orchestrator | changed: [testbed-node-3]
2026-02-04 01:01:30.663700 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-02-04 01:01:30.663704 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-02-04 01:01:30.663707 | orchestrator | changed: [testbed-node-4]
2026-02-04 01:01:30.663711 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-02-04 01:01:30.663714 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-02-04 01:01:30.663718 | orchestrator | changed: [testbed-node-5]
2026-02-04 01:01:30.663721 | orchestrator |
2026-02-04 01:01:30.663725 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************
2026-02-04 01:01:30.663728 | orchestrator | Wednesday 04 February 2026 00:57:06 +0000 (0:00:01.760) 0:08:15.279 ****
2026-02-04 01:01:30.663731 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-02-04 01:01:30.663735 | orchestrator |
2026-02-04 01:01:30.663738 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ******************************
2026-02-04 01:01:30.663742 | orchestrator | Wednesday 04 February 2026 00:57:08 +0000 (0:00:02.343) 0:08:17.623 ****
2026-02-04 01:01:30.663745 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-04 01:01:30.663752 | orchestrator |
2026-02-04 01:01:30.663756 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] *******************************
2026-02-04 01:01:30.663759 | orchestrator | Wednesday 04 February 2026 00:57:09 +0000 (0:00:00.562) 0:08:18.185 ****
2026-02-04 01:01:30.663763 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-6cd3944c-50dd-590e-9699-94e09e9b1959', 'data_vg': 'ceph-6cd3944c-50dd-590e-9699-94e09e9b1959'})
2026-02-04 01:01:30.663767 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-e3daecb5-9fd0-5834-b191-078d341d10dc', 'data_vg': 'ceph-e3daecb5-9fd0-5834-b191-078d341d10dc'})
2026-02-04 01:01:30.663771 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-cab1220b-9ff6-5009-b197-fa753e4036d2', 'data_vg': 'ceph-cab1220b-9ff6-5009-b197-fa753e4036d2'})
2026-02-04 01:01:30.663777 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-607d890d-3e41-57a1-9874-83b389fa50fb', 'data_vg': 'ceph-607d890d-3e41-57a1-9874-83b389fa50fb'})
2026-02-04 01:01:30.663780 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-4adee4b4-d62b-5502-a742-8ac6c3138b01', 'data_vg': 'ceph-4adee4b4-d62b-5502-a742-8ac6c3138b01'})
2026-02-04 01:01:30.663784 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-197bc0b1-bda8-5def-b850-786176b935dd', 'data_vg': 'ceph-197bc0b1-bda8-5def-b850-786176b935dd'})
2026-02-04 01:01:30.663787 | orchestrator |
2026-02-04 01:01:30.663791 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************
2026-02-04 01:01:30.663794 | orchestrator | Wednesday 04 February 2026 00:57:49 +0000 (0:00:40.448) 0:08:58.634 ****
2026-02-04 01:01:30.663800 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:01:30.663804 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:01:30.663807 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:01:30.663811 | orchestrator |
2026-02-04 01:01:30.663814 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] *********************************
2026-02-04 01:01:30.663818 | orchestrator | Wednesday 04 February 2026 00:57:50 +0000 (0:00:00.871) 0:08:59.505 ****
2026-02-04 01:01:30.663821 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-04 01:01:30.663825 | orchestrator |
2026-02-04 01:01:30.663828 | orchestrator | TASK [ceph-osd : Get osd ids] **************************************************
2026-02-04 01:01:30.663832 | orchestrator | Wednesday 04 February 2026 00:57:51 +0000 (0:00:00.596) 0:09:00.102 ****
2026-02-04 01:01:30.663835 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:01:30.663839 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:01:30.663842 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:01:30.663846 | orchestrator |
2026-02-04 01:01:30.663849 | orchestrator | TASK [ceph-osd : Collect osd ids] **********************************************
2026-02-04 01:01:30.663853 | orchestrator | Wednesday 04 February 2026 00:57:51 +0000 (0:00:00.658) 0:09:00.761 ****
2026-02-04 01:01:30.663856 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:01:30.663860 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:01:30.663863 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:01:30.663867 | orchestrator |
2026-02-04 01:01:30.663870 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************
2026-02-04 01:01:30.663874 | orchestrator | Wednesday 04 February 2026 00:57:55 +0000 (0:00:03.480) 0:09:04.241 ****
2026-02-04 01:01:30.663877 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-04 01:01:30.663881 | orchestrator |
2026-02-04 01:01:30.663884 | orchestrator | TASK [ceph-osd : Generate systemd unit file] ***********************************
2026-02-04 01:01:30.663888 | orchestrator | Wednesday 04 February 2026 00:57:56 +0000 (0:00:00.650) 0:09:04.892 ****
2026-02-04 01:01:30.663891 | orchestrator | changed: [testbed-node-3]
2026-02-04 01:01:30.663895 | orchestrator | changed: [testbed-node-4]
2026-02-04 01:01:30.663898 | orchestrator | changed: [testbed-node-5]
2026-02-04 01:01:30.663902 | orchestrator |
2026-02-04 01:01:30.663905 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************
2026-02-04 01:01:30.663911 | orchestrator | Wednesday 04 February 2026 00:57:57 +0000 (0:00:01.313) 0:09:06.205 ****
2026-02-04 01:01:30.663914 | orchestrator | changed: [testbed-node-3]
2026-02-04 01:01:30.663918 | orchestrator | changed: [testbed-node-4]
2026-02-04 01:01:30.663921 | orchestrator | changed: [testbed-node-5]
2026-02-04 01:01:30.663925 | orchestrator |
2026-02-04 01:01:30.663928 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] ***************************************
2026-02-04 01:01:30.663932 | orchestrator | Wednesday 04 February 2026 00:57:59 +0000 (0:00:01.696) 0:09:07.902 ****
2026-02-04 01:01:30.663935 | orchestrator | changed: [testbed-node-3]
2026-02-04 01:01:30.663939 | orchestrator | changed: [testbed-node-4]
2026-02-04 01:01:30.663942 | orchestrator | changed: [testbed-node-5]
2026-02-04 01:01:30.663946 | orchestrator |
2026-02-04 01:01:30.663949 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] *************
2026-02-04 01:01:30.663953 | orchestrator | Wednesday 04 February 2026 00:58:00 +0000 (0:00:01.694) 0:09:09.597 ****
2026-02-04 01:01:30.663956 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:01:30.663960 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:01:30.663963 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:01:30.663968 | orchestrator |
2026-02-04 01:01:30.663974 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] ***********************
2026-02-04 01:01:30.663980 | orchestrator | Wednesday 04 February 2026 00:58:01 +0000 (0:00:00.364) 0:09:09.961 ****
2026-02-04 01:01:30.663986 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:01:30.663992 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:01:30.663998 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:01:30.664005 | orchestrator |
2026-02-04 01:01:30.664010 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] *********
2026-02-04 01:01:30.664016 | orchestrator | Wednesday 04 February 2026 00:58:01 +0000 (0:00:00.380) 0:09:10.341 ****
2026-02-04 01:01:30.664020 | orchestrator | ok: [testbed-node-3] => (item=5)
2026-02-04 01:01:30.664024 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-02-04 01:01:30.664027 | orchestrator | ok: [testbed-node-4] => (item=3)
2026-02-04 01:01:30.664030 | orchestrator | ok: [testbed-node-5] => (item=2)
2026-02-04 01:01:30.664034 | orchestrator | ok: [testbed-node-4] => (item=1)
2026-02-04 01:01:30.664037 | orchestrator | ok: [testbed-node-5] => (item=4)
2026-02-04 01:01:30.664041 | orchestrator |
2026-02-04 01:01:30.664047 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] *****************
2026-02-04 01:01:30.664052 | orchestrator | Wednesday 04 February 2026 00:58:03 +0000 (0:00:01.541) 0:09:11.883 ****
2026-02-04 01:01:30.664058 | orchestrator | changed: [testbed-node-3] => (item=5)
2026-02-04 01:01:30.664064 | orchestrator | changed: [testbed-node-4] => (item=3)
2026-02-04 01:01:30.664069 | orchestrator | changed: [testbed-node-5] => (item=2)
2026-02-04 01:01:30.664074 | orchestrator | changed: [testbed-node-3] => (item=0)
2026-02-04 01:01:30.664080 | orchestrator | changed: [testbed-node-4] => (item=1)
2026-02-04 01:01:30.664085 | orchestrator | changed: [testbed-node-5] => (item=4)
2026-02-04 01:01:30.664091 | orchestrator |
2026-02-04 01:01:30.664100 | orchestrator | TASK [ceph-osd : Systemd start osd] ********************************************
2026-02-04 01:01:30.664107 | orchestrator | Wednesday 04 February 2026 00:58:05 +0000 (0:00:02.345) 0:09:14.229 ****
2026-02-04 01:01:30.664112 | orchestrator | changed: [testbed-node-3] => (item=5)
2026-02-04 01:01:30.664119 | orchestrator | changed: [testbed-node-4] => (item=3)
2026-02-04 01:01:30.664122 | orchestrator | changed: [testbed-node-5] => (item=2)
2026-02-04 01:01:30.664126 | orchestrator | changed: [testbed-node-4] => (item=1)
2026-02-04 01:01:30.664129 | orchestrator | changed: [testbed-node-5] => (item=4)
2026-02-04 01:01:30.664132 | orchestrator | changed: [testbed-node-3] => (item=0)
2026-02-04 01:01:30.664136 | orchestrator |
2026-02-04 01:01:30.664139 | orchestrator | TASK [ceph-osd : Unset noup flag] **********************************************
2026-02-04 01:01:30.664143 | orchestrator | Wednesday 04 February 2026 00:58:09 +0000 (0:00:03.740) 0:09:17.969 ****
2026-02-04 01:01:30.664149 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:01:30.664155 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:01:30.664158 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-02-04 01:01:30.664162 | orchestrator |
2026-02-04 01:01:30.664165 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************
2026-02-04 01:01:30.664169 | orchestrator | Wednesday 04 February 2026 00:58:11 +0000 (0:00:02.310) 0:09:20.280 ****
2026-02-04 01:01:30.664172 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:01:30.664176 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:01:30.664179 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left).
2026-02-04 01:01:30.664183 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-02-04 01:01:30.664186 | orchestrator |
2026-02-04 01:01:30.664190 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] **************************************
2026-02-04 01:01:30.664193 | orchestrator | Wednesday 04 February 2026 00:58:24 +0000 (0:00:13.014) 0:09:33.294 ****
2026-02-04 01:01:30.664197 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:01:30.664200 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:01:30.664204 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:01:30.664207 | orchestrator |
2026-02-04 01:01:30.664211 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-02-04 01:01:30.664214 | orchestrator | Wednesday 04 February 2026 00:58:25 +0000 (0:00:01.072) 0:09:34.366 ****
2026-02-04 01:01:30.664218 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:01:30.664221 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:01:30.664225 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:01:30.664228 | orchestrator |
2026-02-04 01:01:30.664232 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2026-02-04 01:01:30.664235 | orchestrator | Wednesday 04 February 2026 00:58:26 +0000 (0:00:00.926) 0:09:35.293 ****
2026-02-04 01:01:30.664238 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-04 01:01:30.664242 | orchestrator |
2026-02-04 01:01:30.664245 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2026-02-04 01:01:30.664249 | orchestrator | Wednesday 04 February 2026 00:58:27 +0000 (0:00:00.693) 0:09:35.987 ****
2026-02-04 01:01:30.664252 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-04 01:01:30.664256 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-04 01:01:30.664259 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-04 01:01:30.664263 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:01:30.664266 | orchestrator |
2026-02-04 01:01:30.664270 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2026-02-04 01:01:30.664273 | orchestrator | Wednesday 04 February 2026 00:58:27 +0000 (0:00:00.453) 0:09:36.440 ****
2026-02-04 01:01:30.664277 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:01:30.664280 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:01:30.664283 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:01:30.664287 | orchestrator |
2026-02-04 01:01:30.664290 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2026-02-04 01:01:30.664294 | orchestrator | Wednesday 04 February 2026 00:58:28 +0000 (0:00:00.785) 0:09:37.226 ****
2026-02-04 01:01:30.664297 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:01:30.664301 | orchestrator |
2026-02-04 01:01:30.664304 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2026-02-04 01:01:30.664308 | orchestrator | Wednesday 04 February 2026 00:58:28 +0000 (0:00:00.266) 0:09:37.492 ****
2026-02-04 01:01:30.664311 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:01:30.664315 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:01:30.664318 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:01:30.664322 | orchestrator |
2026-02-04 01:01:30.664325 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2026-02-04 01:01:30.664328 | orchestrator | Wednesday 04 February 2026 00:58:29 +0000 (0:00:00.394) 0:09:37.886 ****
2026-02-04 01:01:30.664334 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:01:30.664337 | orchestrator |
2026-02-04 01:01:30.664341 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2026-02-04 01:01:30.664344 | orchestrator | Wednesday 04 February 2026 00:58:29 +0000 (0:00:00.266) 0:09:38.153 ****
2026-02-04 01:01:30.664348 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:01:30.664351 | orchestrator |
2026-02-04 01:01:30.664355 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2026-02-04 01:01:30.664358 | orchestrator | Wednesday 04 February 2026 00:58:29 +0000 (0:00:00.256) 0:09:38.409 ****
2026-02-04 01:01:30.664362 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:01:30.664365 | orchestrator |
2026-02-04 01:01:30.664368 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2026-02-04 01:01:30.664372 | orchestrator | Wednesday 04 February 2026 00:58:29 +0000 (0:00:00.140) 0:09:38.550 ****
2026-02-04 01:01:30.664375 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:01:30.664379 | orchestrator |
2026-02-04 01:01:30.664382 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2026-02-04 01:01:30.664386 | orchestrator | Wednesday 04 February 2026 00:58:30 +0000 (0:00:00.263) 0:09:38.813 ****
2026-02-04 01:01:30.664391 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:01:30.664395 | orchestrator |
2026-02-04 01:01:30.664399 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2026-02-04 01:01:30.664402 | orchestrator | Wednesday 04 February 2026 00:58:30 +0000 (0:00:00.303) 0:09:39.117 ****
2026-02-04 01:01:30.664405 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-04 01:01:30.664409 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-04 01:01:30.664413 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-04 01:01:30.664416 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:01:30.664420 | orchestrator |
2026-02-04 01:01:30.664423 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2026-02-04 01:01:30.664427 | orchestrator | Wednesday 04 February 2026 00:58:31 +0000 (0:00:00.892) 0:09:40.009 ****
2026-02-04 01:01:30.664432 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:01:30.664436 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:01:30.664439 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:01:30.664443 | orchestrator |
2026-02-04 01:01:30.664446 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2026-02-04 01:01:30.664450 | orchestrator | Wednesday 04 February 2026 00:58:32 +0000 (0:00:00.966) 0:09:40.975 ****
2026-02-04 01:01:30.664453 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:01:30.664457 | orchestrator |
2026-02-04 01:01:30.664460 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2026-02-04 01:01:30.664464 | orchestrator | Wednesday 04 February 2026 00:58:32 +0000 (0:00:00.270) 0:09:41.246 ****
2026-02-04 01:01:30.664467 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:01:30.664471 | orchestrator |
2026-02-04 01:01:30.664474 | orchestrator | PLAY [Apply role ceph-crash] ***************************************************
2026-02-04 01:01:30.664478 | orchestrator |
2026-02-04 01:01:30.664481 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-04 01:01:30.664485 | orchestrator | Wednesday 04 February 2026 00:58:33 +0000 (0:00:01.094) 0:09:42.340 ****
2026-02-04 01:01:30.664488 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-04 01:01:30.664492 | orchestrator |
2026-02-04 01:01:30.664496 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-04 01:01:30.664499 | orchestrator | Wednesday 04 February 2026 00:58:35 +0000 (0:00:01.610) 0:09:43.951 ****
2026-02-04 01:01:30.664502 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-04 01:01:30.664508 | orchestrator |
2026-02-04 01:01:30.664512 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-04 01:01:30.664515 | orchestrator | Wednesday 04 February 2026 00:58:36 +0000 (0:00:01.625) 0:09:45.577 ****
2026-02-04 01:01:30.664519 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:01:30.664523 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:01:30.664526 | orchestrator | ok: [testbed-node-1]
2026-02-04 01:01:30.664560 | orchestrator | ok: [testbed-node-2]
2026-02-04 01:01:30.664564 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:01:30.664568 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:01:30.664571 | orchestrator |
2026-02-04 01:01:30.664575 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-04 01:01:30.664578 | orchestrator | Wednesday 04 February 2026 00:58:38 +0000 (0:00:01.376) 0:09:46.953 ****
2026-02-04 01:01:30.664582 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:01:30.664585 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:01:30.664589 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:01:30.664592 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:01:30.664596 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:01:30.664599 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:01:30.664603 | orchestrator |
2026-02-04 01:01:30.664606 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-04 01:01:30.664610 | orchestrator | Wednesday 04
February 2026 00:58:39 +0000 (0:00:01.316) 0:09:48.270 **** 2026-02-04 01:01:30.664614 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:01:30.664617 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:01:30.664621 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:01:30.664624 | orchestrator | ok: [testbed-node-3] 2026-02-04 01:01:30.664628 | orchestrator | ok: [testbed-node-4] 2026-02-04 01:01:30.664631 | orchestrator | ok: [testbed-node-5] 2026-02-04 01:01:30.664635 | orchestrator | 2026-02-04 01:01:30.664638 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-04 01:01:30.664642 | orchestrator | Wednesday 04 February 2026 00:58:41 +0000 (0:00:01.613) 0:09:49.883 **** 2026-02-04 01:01:30.664645 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:01:30.664649 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:01:30.664652 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:01:30.664656 | orchestrator | ok: [testbed-node-3] 2026-02-04 01:01:30.664660 | orchestrator | ok: [testbed-node-4] 2026-02-04 01:01:30.664663 | orchestrator | ok: [testbed-node-5] 2026-02-04 01:01:30.664667 | orchestrator | 2026-02-04 01:01:30.664670 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-04 01:01:30.664674 | orchestrator | Wednesday 04 February 2026 00:58:42 +0000 (0:00:01.183) 0:09:51.067 **** 2026-02-04 01:01:30.664677 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:01:30.664681 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:01:30.664684 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:01:30.664688 | orchestrator | ok: [testbed-node-1] 2026-02-04 01:01:30.664691 | orchestrator | ok: [testbed-node-2] 2026-02-04 01:01:30.664695 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:01:30.664698 | orchestrator | 2026-02-04 01:01:30.664702 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] 
************************* 2026-02-04 01:01:30.664705 | orchestrator | Wednesday 04 February 2026 00:58:43 +0000 (0:00:01.249) 0:09:52.317 **** 2026-02-04 01:01:30.664709 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:01:30.664712 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:01:30.664716 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:01:30.664720 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:01:30.664723 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:01:30.664729 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:01:30.664732 | orchestrator | 2026-02-04 01:01:30.664736 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-04 01:01:30.664740 | orchestrator | Wednesday 04 February 2026 00:58:44 +0000 (0:00:00.853) 0:09:53.170 **** 2026-02-04 01:01:30.664746 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:01:30.664749 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:01:30.664753 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:01:30.664756 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:01:30.664760 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:01:30.664763 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:01:30.664767 | orchestrator | 2026-02-04 01:01:30.664770 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-04 01:01:30.664774 | orchestrator | Wednesday 04 February 2026 00:58:45 +0000 (0:00:01.270) 0:09:54.441 **** 2026-02-04 01:01:30.664778 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:01:30.664781 | orchestrator | ok: [testbed-node-1] 2026-02-04 01:01:30.664785 | orchestrator | ok: [testbed-node-2] 2026-02-04 01:01:30.664788 | orchestrator | ok: [testbed-node-3] 2026-02-04 01:01:30.664792 | orchestrator | ok: [testbed-node-4] 2026-02-04 01:01:30.664795 | orchestrator | ok: [testbed-node-5] 2026-02-04 01:01:30.664799 | orchestrator 
| 2026-02-04 01:01:30.664802 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-04 01:01:30.664806 | orchestrator | Wednesday 04 February 2026 00:58:46 +0000 (0:00:01.223) 0:09:55.664 **** 2026-02-04 01:01:30.664809 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:01:30.664813 | orchestrator | ok: [testbed-node-1] 2026-02-04 01:01:30.664816 | orchestrator | ok: [testbed-node-2] 2026-02-04 01:01:30.664820 | orchestrator | ok: [testbed-node-3] 2026-02-04 01:01:30.664823 | orchestrator | ok: [testbed-node-4] 2026-02-04 01:01:30.664827 | orchestrator | ok: [testbed-node-5] 2026-02-04 01:01:30.664830 | orchestrator | 2026-02-04 01:01:30.664834 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-04 01:01:30.664837 | orchestrator | Wednesday 04 February 2026 00:58:48 +0000 (0:00:01.519) 0:09:57.184 **** 2026-02-04 01:01:30.664841 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:01:30.664844 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:01:30.664848 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:01:30.664851 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:01:30.664854 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:01:30.664858 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:01:30.664861 | orchestrator | 2026-02-04 01:01:30.664865 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-04 01:01:30.664868 | orchestrator | Wednesday 04 February 2026 00:58:49 +0000 (0:00:00.697) 0:09:57.881 **** 2026-02-04 01:01:30.664872 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:01:30.664875 | orchestrator | ok: [testbed-node-1] 2026-02-04 01:01:30.664879 | orchestrator | ok: [testbed-node-2] 2026-02-04 01:01:30.664882 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:01:30.664886 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:01:30.664889 | 
orchestrator | skipping: [testbed-node-5] 2026-02-04 01:01:30.664893 | orchestrator | 2026-02-04 01:01:30.664896 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-04 01:01:30.664900 | orchestrator | Wednesday 04 February 2026 00:58:50 +0000 (0:00:01.004) 0:09:58.885 **** 2026-02-04 01:01:30.664903 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:01:30.664907 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:01:30.664910 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:01:30.664913 | orchestrator | ok: [testbed-node-3] 2026-02-04 01:01:30.664917 | orchestrator | ok: [testbed-node-4] 2026-02-04 01:01:30.664920 | orchestrator | ok: [testbed-node-5] 2026-02-04 01:01:30.664924 | orchestrator | 2026-02-04 01:01:30.664927 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-04 01:01:30.664931 | orchestrator | Wednesday 04 February 2026 00:58:50 +0000 (0:00:00.736) 0:09:59.622 **** 2026-02-04 01:01:30.664934 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:01:30.664938 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:01:30.664941 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:01:30.664945 | orchestrator | ok: [testbed-node-3] 2026-02-04 01:01:30.664952 | orchestrator | ok: [testbed-node-4] 2026-02-04 01:01:30.664956 | orchestrator | ok: [testbed-node-5] 2026-02-04 01:01:30.664959 | orchestrator | 2026-02-04 01:01:30.664963 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-04 01:01:30.664966 | orchestrator | Wednesday 04 February 2026 00:58:51 +0000 (0:00:00.659) 0:10:00.281 **** 2026-02-04 01:01:30.664970 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:01:30.664973 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:01:30.664977 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:01:30.664980 | orchestrator | ok: [testbed-node-3] 
2026-02-04 01:01:30.664984 | orchestrator | ok: [testbed-node-4] 2026-02-04 01:01:30.664987 | orchestrator | ok: [testbed-node-5] 2026-02-04 01:01:30.664990 | orchestrator | 2026-02-04 01:01:30.664994 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-04 01:01:30.664997 | orchestrator | Wednesday 04 February 2026 00:58:52 +0000 (0:00:01.052) 0:10:01.334 **** 2026-02-04 01:01:30.665001 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:01:30.665004 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:01:30.665008 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:01:30.665011 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:01:30.665015 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:01:30.665018 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:01:30.665022 | orchestrator | 2026-02-04 01:01:30.665025 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-04 01:01:30.665029 | orchestrator | Wednesday 04 February 2026 00:58:53 +0000 (0:00:00.681) 0:10:02.016 **** 2026-02-04 01:01:30.665032 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:01:30.665036 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:01:30.665039 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:01:30.665043 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:01:30.665046 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:01:30.665049 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:01:30.665053 | orchestrator | 2026-02-04 01:01:30.665056 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-04 01:01:30.665060 | orchestrator | Wednesday 04 February 2026 00:58:54 +0000 (0:00:01.064) 0:10:03.080 **** 2026-02-04 01:01:30.665064 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:01:30.665067 | orchestrator | ok: [testbed-node-1] 2026-02-04 01:01:30.665071 | 
orchestrator | ok: [testbed-node-2] 2026-02-04 01:01:30.665074 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:01:30.665079 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:01:30.665083 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:01:30.665087 | orchestrator | 2026-02-04 01:01:30.665090 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-04 01:01:30.665094 | orchestrator | Wednesday 04 February 2026 00:58:55 +0000 (0:00:00.694) 0:10:03.774 **** 2026-02-04 01:01:30.665097 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:01:30.665100 | orchestrator | ok: [testbed-node-1] 2026-02-04 01:01:30.665122 | orchestrator | ok: [testbed-node-2] 2026-02-04 01:01:30.665127 | orchestrator | ok: [testbed-node-3] 2026-02-04 01:01:30.665133 | orchestrator | ok: [testbed-node-4] 2026-02-04 01:01:30.665139 | orchestrator | ok: [testbed-node-5] 2026-02-04 01:01:30.665144 | orchestrator | 2026-02-04 01:01:30.665150 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-04 01:01:30.665155 | orchestrator | Wednesday 04 February 2026 00:58:56 +0000 (0:00:01.174) 0:10:04.949 **** 2026-02-04 01:01:30.665162 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:01:30.665167 | orchestrator | ok: [testbed-node-1] 2026-02-04 01:01:30.665173 | orchestrator | ok: [testbed-node-2] 2026-02-04 01:01:30.665178 | orchestrator | ok: [testbed-node-3] 2026-02-04 01:01:30.665183 | orchestrator | ok: [testbed-node-4] 2026-02-04 01:01:30.665189 | orchestrator | ok: [testbed-node-5] 2026-02-04 01:01:30.665195 | orchestrator | 2026-02-04 01:01:30.665200 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ******************************** 2026-02-04 01:01:30.665206 | orchestrator | Wednesday 04 February 2026 00:58:57 +0000 (0:00:01.623) 0:10:06.572 **** 2026-02-04 01:01:30.665216 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:01:30.665221 | orchestrator 
| 2026-02-04 01:01:30.665228 | orchestrator | TASK [ceph-crash : Get keys from monitors] ************************************* 2026-02-04 01:01:30.665231 | orchestrator | Wednesday 04 February 2026 00:59:01 +0000 (0:00:04.172) 0:10:10.745 **** 2026-02-04 01:01:30.665234 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:01:30.665237 | orchestrator | 2026-02-04 01:01:30.665241 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] ********************************* 2026-02-04 01:01:30.665244 | orchestrator | Wednesday 04 February 2026 00:59:04 +0000 (0:00:02.200) 0:10:12.945 **** 2026-02-04 01:01:30.665247 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:01:30.665250 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:01:30.665254 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:01:30.665257 | orchestrator | changed: [testbed-node-3] 2026-02-04 01:01:30.665260 | orchestrator | changed: [testbed-node-4] 2026-02-04 01:01:30.665263 | orchestrator | changed: [testbed-node-5] 2026-02-04 01:01:30.665266 | orchestrator | 2026-02-04 01:01:30.665270 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] ************************** 2026-02-04 01:01:30.665273 | orchestrator | Wednesday 04 February 2026 00:59:06 +0000 (0:00:02.032) 0:10:14.977 **** 2026-02-04 01:01:30.665276 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:01:30.665279 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:01:30.665282 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:01:30.665286 | orchestrator | changed: [testbed-node-3] 2026-02-04 01:01:30.665289 | orchestrator | changed: [testbed-node-4] 2026-02-04 01:01:30.665292 | orchestrator | changed: [testbed-node-5] 2026-02-04 01:01:30.665295 | orchestrator | 2026-02-04 01:01:30.665298 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] ********************************** 2026-02-04 01:01:30.665302 | orchestrator | Wednesday 04 February 2026 00:59:07 +0000 (0:00:01.143) 0:10:16.121 
**** 2026-02-04 01:01:30.665305 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-04 01:01:30.665309 | orchestrator | 2026-02-04 01:01:30.665312 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ******** 2026-02-04 01:01:30.665316 | orchestrator | Wednesday 04 February 2026 00:59:08 +0000 (0:00:01.459) 0:10:17.581 **** 2026-02-04 01:01:30.665321 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:01:30.665325 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:01:30.665328 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:01:30.665331 | orchestrator | changed: [testbed-node-4] 2026-02-04 01:01:30.665334 | orchestrator | changed: [testbed-node-3] 2026-02-04 01:01:30.665337 | orchestrator | changed: [testbed-node-5] 2026-02-04 01:01:30.665340 | orchestrator | 2026-02-04 01:01:30.665344 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] ******************************* 2026-02-04 01:01:30.665347 | orchestrator | Wednesday 04 February 2026 00:59:10 +0000 (0:00:02.111) 0:10:19.692 **** 2026-02-04 01:01:30.665350 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:01:30.665353 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:01:30.665356 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:01:30.665360 | orchestrator | changed: [testbed-node-3] 2026-02-04 01:01:30.665363 | orchestrator | changed: [testbed-node-4] 2026-02-04 01:01:30.665366 | orchestrator | changed: [testbed-node-5] 2026-02-04 01:01:30.665369 | orchestrator | 2026-02-04 01:01:30.665372 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] **************************** 2026-02-04 01:01:30.665376 | orchestrator | Wednesday 04 February 2026 00:59:15 +0000 (0:00:04.254) 0:10:23.946 **** 2026-02-04 01:01:30.665379 | orchestrator | included: 
/ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-04 01:01:30.665382 | orchestrator | 2026-02-04 01:01:30.665385 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ****** 2026-02-04 01:01:30.665388 | orchestrator | Wednesday 04 February 2026 00:59:16 +0000 (0:00:01.620) 0:10:25.567 **** 2026-02-04 01:01:30.665394 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:01:30.665397 | orchestrator | ok: [testbed-node-1] 2026-02-04 01:01:30.665400 | orchestrator | ok: [testbed-node-2] 2026-02-04 01:01:30.665404 | orchestrator | ok: [testbed-node-3] 2026-02-04 01:01:30.665407 | orchestrator | ok: [testbed-node-4] 2026-02-04 01:01:30.665410 | orchestrator | ok: [testbed-node-5] 2026-02-04 01:01:30.665413 | orchestrator | 2026-02-04 01:01:30.665417 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] **************** 2026-02-04 01:01:30.665420 | orchestrator | Wednesday 04 February 2026 00:59:17 +0000 (0:00:00.741) 0:10:26.309 **** 2026-02-04 01:01:30.665423 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:01:30.665426 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:01:30.665430 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:01:30.665433 | orchestrator | changed: [testbed-node-4] 2026-02-04 01:01:30.665436 | orchestrator | changed: [testbed-node-3] 2026-02-04 01:01:30.665443 | orchestrator | changed: [testbed-node-5] 2026-02-04 01:01:30.665446 | orchestrator | 2026-02-04 01:01:30.665449 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] ******* 2026-02-04 01:01:30.665453 | orchestrator | Wednesday 04 February 2026 00:59:20 +0000 (0:00:02.770) 0:10:29.080 **** 2026-02-04 01:01:30.665456 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:01:30.665459 | orchestrator | ok: [testbed-node-1] 2026-02-04 01:01:30.665462 | orchestrator | ok: 
[testbed-node-2] 2026-02-04 01:01:30.665466 | orchestrator | ok: [testbed-node-3] 2026-02-04 01:01:30.665469 | orchestrator | ok: [testbed-node-4] 2026-02-04 01:01:30.665472 | orchestrator | ok: [testbed-node-5] 2026-02-04 01:01:30.665475 | orchestrator | 2026-02-04 01:01:30.665479 | orchestrator | PLAY [Apply role ceph-mds] ***************************************************** 2026-02-04 01:01:30.665482 | orchestrator | 2026-02-04 01:01:30.665485 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-04 01:01:30.665490 | orchestrator | Wednesday 04 February 2026 00:59:21 +0000 (0:00:01.283) 0:10:30.363 **** 2026-02-04 01:01:30.665494 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-04 01:01:30.665497 | orchestrator | 2026-02-04 01:01:30.665500 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-04 01:01:30.665503 | orchestrator | Wednesday 04 February 2026 00:59:22 +0000 (0:00:00.618) 0:10:30.982 **** 2026-02-04 01:01:30.665507 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-04 01:01:30.665510 | orchestrator | 2026-02-04 01:01:30.665513 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-04 01:01:30.665516 | orchestrator | Wednesday 04 February 2026 00:59:23 +0000 (0:00:00.956) 0:10:31.938 **** 2026-02-04 01:01:30.665520 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:01:30.665523 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:01:30.665526 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:01:30.665539 | orchestrator | 2026-02-04 01:01:30.665543 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-04 01:01:30.665546 | orchestrator | 
Wednesday 04 February 2026 00:59:23 +0000 (0:00:00.387) 0:10:32.326 **** 2026-02-04 01:01:30.665549 | orchestrator | ok: [testbed-node-3] 2026-02-04 01:01:30.665552 | orchestrator | ok: [testbed-node-4] 2026-02-04 01:01:30.665556 | orchestrator | ok: [testbed-node-5] 2026-02-04 01:01:30.665559 | orchestrator | 2026-02-04 01:01:30.665562 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-04 01:01:30.665566 | orchestrator | Wednesday 04 February 2026 00:59:24 +0000 (0:00:00.891) 0:10:33.217 **** 2026-02-04 01:01:30.665569 | orchestrator | ok: [testbed-node-3] 2026-02-04 01:01:30.665572 | orchestrator | ok: [testbed-node-4] 2026-02-04 01:01:30.665575 | orchestrator | ok: [testbed-node-5] 2026-02-04 01:01:30.665578 | orchestrator | 2026-02-04 01:01:30.665582 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-04 01:01:30.665588 | orchestrator | Wednesday 04 February 2026 00:59:25 +0000 (0:00:00.931) 0:10:34.148 **** 2026-02-04 01:01:30.665591 | orchestrator | ok: [testbed-node-3] 2026-02-04 01:01:30.665594 | orchestrator | ok: [testbed-node-4] 2026-02-04 01:01:30.665597 | orchestrator | ok: [testbed-node-5] 2026-02-04 01:01:30.665601 | orchestrator | 2026-02-04 01:01:30.665604 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-04 01:01:30.665607 | orchestrator | Wednesday 04 February 2026 00:59:26 +0000 (0:00:01.159) 0:10:35.308 **** 2026-02-04 01:01:30.665611 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:01:30.665614 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:01:30.665617 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:01:30.665620 | orchestrator | 2026-02-04 01:01:30.665623 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-04 01:01:30.665627 | orchestrator | Wednesday 04 February 2026 00:59:26 +0000 (0:00:00.450) 
0:10:35.759 **** 2026-02-04 01:01:30.665630 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:01:30.665633 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:01:30.665636 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:01:30.665640 | orchestrator | 2026-02-04 01:01:30.665643 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-04 01:01:30.665646 | orchestrator | Wednesday 04 February 2026 00:59:27 +0000 (0:00:00.412) 0:10:36.171 **** 2026-02-04 01:01:30.665649 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:01:30.665652 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:01:30.665656 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:01:30.665659 | orchestrator | 2026-02-04 01:01:30.665662 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-04 01:01:30.665665 | orchestrator | Wednesday 04 February 2026 00:59:27 +0000 (0:00:00.325) 0:10:36.496 **** 2026-02-04 01:01:30.665669 | orchestrator | ok: [testbed-node-3] 2026-02-04 01:01:30.665672 | orchestrator | ok: [testbed-node-4] 2026-02-04 01:01:30.665675 | orchestrator | ok: [testbed-node-5] 2026-02-04 01:01:30.665678 | orchestrator | 2026-02-04 01:01:30.665682 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-04 01:01:30.665685 | orchestrator | Wednesday 04 February 2026 00:59:29 +0000 (0:00:01.408) 0:10:37.904 **** 2026-02-04 01:01:30.665688 | orchestrator | ok: [testbed-node-3] 2026-02-04 01:01:30.665691 | orchestrator | ok: [testbed-node-4] 2026-02-04 01:01:30.665695 | orchestrator | ok: [testbed-node-5] 2026-02-04 01:01:30.665698 | orchestrator | 2026-02-04 01:01:30.665701 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-04 01:01:30.665705 | orchestrator | Wednesday 04 February 2026 00:59:30 +0000 (0:00:00.880) 0:10:38.785 **** 2026-02-04 
01:01:30.665708 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:01:30.665711 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:01:30.665714 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:01:30.665718 | orchestrator | 2026-02-04 01:01:30.665721 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-04 01:01:30.665724 | orchestrator | Wednesday 04 February 2026 00:59:30 +0000 (0:00:00.425) 0:10:39.210 **** 2026-02-04 01:01:30.665727 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:01:30.665731 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:01:30.665734 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:01:30.665737 | orchestrator | 2026-02-04 01:01:30.665740 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-04 01:01:30.665746 | orchestrator | Wednesday 04 February 2026 00:59:30 +0000 (0:00:00.326) 0:10:39.537 **** 2026-02-04 01:01:30.665749 | orchestrator | ok: [testbed-node-3] 2026-02-04 01:01:30.665752 | orchestrator | ok: [testbed-node-4] 2026-02-04 01:01:30.665756 | orchestrator | ok: [testbed-node-5] 2026-02-04 01:01:30.665759 | orchestrator | 2026-02-04 01:01:30.665762 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-04 01:01:30.665766 | orchestrator | Wednesday 04 February 2026 00:59:31 +0000 (0:00:00.752) 0:10:40.289 **** 2026-02-04 01:01:30.665771 | orchestrator | ok: [testbed-node-3] 2026-02-04 01:01:30.665774 | orchestrator | ok: [testbed-node-4] 2026-02-04 01:01:30.665778 | orchestrator | ok: [testbed-node-5] 2026-02-04 01:01:30.665781 | orchestrator | 2026-02-04 01:01:30.665784 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-04 01:01:30.665787 | orchestrator | Wednesday 04 February 2026 00:59:31 +0000 (0:00:00.363) 0:10:40.652 **** 2026-02-04 01:01:30.665793 | orchestrator | ok: 
[testbed-node-3] 2026-02-04 01:01:30.665797 | orchestrator | ok: [testbed-node-4] 2026-02-04 01:01:30.665800 | orchestrator | ok: [testbed-node-5] 2026-02-04 01:01:30.665803 | orchestrator | 2026-02-04 01:01:30.665807 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-04 01:01:30.665810 | orchestrator | Wednesday 04 February 2026 00:59:32 +0000 (0:00:00.372) 0:10:41.024 **** 2026-02-04 01:01:30.665813 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:01:30.665816 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:01:30.665820 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:01:30.665823 | orchestrator | 2026-02-04 01:01:30.665826 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-04 01:01:30.665829 | orchestrator | Wednesday 04 February 2026 00:59:32 +0000 (0:00:00.355) 0:10:41.380 **** 2026-02-04 01:01:30.665833 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:01:30.665836 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:01:30.665839 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:01:30.665842 | orchestrator | 2026-02-04 01:01:30.665846 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-04 01:01:30.665849 | orchestrator | Wednesday 04 February 2026 00:59:33 +0000 (0:00:00.690) 0:10:42.071 **** 2026-02-04 01:01:30.665852 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:01:30.665856 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:01:30.665859 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:01:30.665862 | orchestrator | 2026-02-04 01:01:30.665865 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-04 01:01:30.665869 | orchestrator | Wednesday 04 February 2026 00:59:33 +0000 (0:00:00.353) 0:10:42.425 **** 2026-02-04 01:01:30.665872 | orchestrator | ok: [testbed-node-3] 
2026-02-04 01:01:30.665875 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:01:30.665879 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:01:30.665882 | orchestrator |
2026-02-04 01:01:30.665885 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-04 01:01:30.665888 | orchestrator | Wednesday 04 February 2026 00:59:34 +0000 (0:00:00.356) 0:10:42.781 ****
2026-02-04 01:01:30.665892 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:01:30.665895 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:01:30.665898 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:01:30.665902 | orchestrator |
2026-02-04 01:01:30.665905 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] ***************************
2026-02-04 01:01:30.665908 | orchestrator | Wednesday 04 February 2026 00:59:34 +0000 (0:00:00.965) 0:10:43.746 ****
2026-02-04 01:01:30.665911 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:01:30.665915 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:01:30.665918 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3
2026-02-04 01:01:30.665921 | orchestrator |
2026-02-04 01:01:30.665925 | orchestrator | TASK [ceph-facts : Get current default crush rule details] *********************
2026-02-04 01:01:30.665928 | orchestrator | Wednesday 04 February 2026 00:59:35 +0000 (0:00:00.479) 0:10:44.226 ****
2026-02-04 01:01:30.665931 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-02-04 01:01:30.665934 | orchestrator |
2026-02-04 01:01:30.665938 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************
2026-02-04 01:01:30.665941 | orchestrator | Wednesday 04 February 2026 00:59:37 +0000 (0:00:02.259) 0:10:46.486 ****
2026-02-04 01:01:30.665945 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})
2026-02-04 01:01:30.665952 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:01:30.665955 | orchestrator |
2026-02-04 01:01:30.665959 | orchestrator | TASK [ceph-mds : Create filesystem pools] **************************************
2026-02-04 01:01:30.665962 | orchestrator | Wednesday 04 February 2026 00:59:37 +0000 (0:00:00.229) 0:10:46.715 ****
2026-02-04 01:01:30.665966 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-02-04 01:01:30.665973 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-02-04 01:01:30.665976 | orchestrator |
2026-02-04 01:01:30.665979 | orchestrator | TASK [ceph-mds : Create ceph filesystem] ***************************************
2026-02-04 01:01:30.665983 | orchestrator | Wednesday 04 February 2026 00:59:46 +0000 (0:00:08.719) 0:10:55.435 ****
2026-02-04 01:01:30.665986 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-02-04 01:01:30.665989 | orchestrator |
2026-02-04 01:01:30.665993 | orchestrator | TASK [ceph-mds : Include common.yml] *******************************************
2026-02-04 01:01:30.665998 | orchestrator | Wednesday 04 February 2026 00:59:50 +0000 (0:00:03.742) 0:10:59.177 ****
2026-02-04 01:01:30.666001 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-04 01:01:30.666004 | orchestrator |
2026-02-04 01:01:30.666008 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] *********************
2026-02-04 01:01:30.666040 | orchestrator | Wednesday 04 February 2026 00:59:51 +0000 (0:00:00.975) 0:11:00.153 ****
2026-02-04 01:01:30.666044 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/)
2026-02-04 01:01:30.666048 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/)
2026-02-04 01:01:30.666051 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/)
2026-02-04 01:01:30.666056 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3)
2026-02-04 01:01:30.666060 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4)
2026-02-04 01:01:30.666063 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5)
2026-02-04 01:01:30.666066 | orchestrator |
2026-02-04 01:01:30.666069 | orchestrator | TASK [ceph-mds : Get keys from monitors] ***************************************
2026-02-04 01:01:30.666073 | orchestrator | Wednesday 04 February 2026 00:59:52 +0000 (0:00:01.433) 0:11:01.586 ****
2026-02-04 01:01:30.666076 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-04 01:01:30.666079 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-02-04 01:01:30.666082 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-02-04 01:01:30.666085 | orchestrator |
2026-02-04 01:01:30.666089 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] ***********************************
2026-02-04 01:01:30.666092 | orchestrator | Wednesday 04 February 2026 00:59:55 +0000 (0:00:02.368) 0:11:03.955 ****
2026-02-04 01:01:30.666095 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-02-04 01:01:30.666098 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-02-04 01:01:30.666102 | orchestrator | changed: [testbed-node-3]
2026-02-04 01:01:30.666105 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-02-04 01:01:30.666108 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-02-04 01:01:30.666111 | orchestrator | changed: [testbed-node-4]
2026-02-04 01:01:30.666115 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-02-04 01:01:30.666118 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-02-04 01:01:30.666124 | orchestrator | changed: [testbed-node-5]
2026-02-04 01:01:30.666127 | orchestrator |
2026-02-04 01:01:30.666130 | orchestrator | TASK [ceph-mds : Create mds keyring] *******************************************
2026-02-04 01:01:30.666133 | orchestrator | Wednesday 04 February 2026 00:59:56 +0000 (0:00:01.310) 0:11:05.265 ****
2026-02-04 01:01:30.666137 | orchestrator | changed: [testbed-node-3]
2026-02-04 01:01:30.666140 | orchestrator | changed: [testbed-node-4]
2026-02-04 01:01:30.666143 | orchestrator | changed: [testbed-node-5]
2026-02-04 01:01:30.666146 | orchestrator |
2026-02-04 01:01:30.666150 | orchestrator | TASK [ceph-mds : Non_containerized.yml] ****************************************
2026-02-04 01:01:30.666153 | orchestrator | Wednesday 04 February 2026 00:59:59 +0000 (0:00:03.190) 0:11:08.455 ****
2026-02-04 01:01:30.666156 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:01:30.666160 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:01:30.666163 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:01:30.666166 | orchestrator |
2026-02-04 01:01:30.666169 | orchestrator | TASK [ceph-mds : Containerized.yml] ********************************************
2026-02-04 01:01:30.666172 | orchestrator | Wednesday 04 February 2026 01:00:00 +0000 (0:00:00.375) 0:11:08.830 ****
2026-02-04 01:01:30.666176 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-04 01:01:30.666179 | orchestrator |
2026-02-04 01:01:30.666182 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************
2026-02-04 01:01:30.666185 | orchestrator | Wednesday 04 February 2026 01:00:00 +0000 (0:00:00.653) 0:11:09.484 ****
2026-02-04 01:01:30.666189 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-04 01:01:30.666192 | orchestrator |
2026-02-04 01:01:30.666195 | orchestrator | TASK [ceph-mds : Generate systemd unit file] ***********************************
2026-02-04 01:01:30.666198 | orchestrator | Wednesday 04 February 2026 01:00:01 +0000 (0:00:00.934) 0:11:10.419 ****
2026-02-04 01:01:30.666202 | orchestrator | changed: [testbed-node-4]
2026-02-04 01:01:30.666205 | orchestrator | changed: [testbed-node-3]
2026-02-04 01:01:30.666208 | orchestrator | changed: [testbed-node-5]
2026-02-04 01:01:30.666211 | orchestrator |
2026-02-04 01:01:30.666215 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************
2026-02-04 01:01:30.666218 | orchestrator | Wednesday 04 February 2026 01:00:03 +0000 (0:00:01.770) 0:11:12.189 ****
2026-02-04 01:01:30.666221 | orchestrator | changed: [testbed-node-3]
2026-02-04 01:01:30.666224 | orchestrator | changed: [testbed-node-4]
2026-02-04 01:01:30.666228 | orchestrator | changed: [testbed-node-5]
2026-02-04 01:01:30.666231 | orchestrator |
2026-02-04 01:01:30.666234 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] ***************************************
2026-02-04 01:01:30.666237 | orchestrator | Wednesday 04 February 2026 01:00:04 +0000 (0:00:01.324) 0:11:13.514 ****
2026-02-04 01:01:30.666241 | orchestrator | changed: [testbed-node-3]
2026-02-04 01:01:30.666244 | orchestrator | changed: [testbed-node-4]
2026-02-04 01:01:30.666247 | orchestrator | changed: [testbed-node-5]
2026-02-04 01:01:30.666250 | orchestrator |
2026-02-04 01:01:30.666253 | orchestrator | TASK [ceph-mds : Systemd start mds container] **********************************
2026-02-04 01:01:30.666257 | orchestrator | Wednesday 04 February 2026 01:00:06 +0000 (0:00:02.241) 0:11:15.755 ****
2026-02-04 01:01:30.666260 | orchestrator | changed: [testbed-node-3]
2026-02-04 01:01:30.666263 | orchestrator | changed: [testbed-node-5]
2026-02-04 01:01:30.666266 | orchestrator | changed: [testbed-node-4]
2026-02-04 01:01:30.666270 | orchestrator |
2026-02-04 01:01:30.666273 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] *********************************
2026-02-04 01:01:30.666279 | orchestrator | Wednesday 04 February 2026 01:00:09 +0000 (0:00:02.311) 0:11:18.066 ****
2026-02-04 01:01:30.666282 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:01:30.666285 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:01:30.666289 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:01:30.666292 | orchestrator |
2026-02-04 01:01:30.666295 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-02-04 01:01:30.666301 | orchestrator | Wednesday 04 February 2026 01:00:11 +0000 (0:00:01.722) 0:11:19.789 ****
2026-02-04 01:01:30.666304 | orchestrator | changed: [testbed-node-3]
2026-02-04 01:01:30.666307 | orchestrator | changed: [testbed-node-4]
2026-02-04 01:01:30.666311 | orchestrator | changed: [testbed-node-5]
2026-02-04 01:01:30.666314 | orchestrator |
2026-02-04 01:01:30.666317 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2026-02-04 01:01:30.666320 | orchestrator | Wednesday 04 February 2026 01:00:11 +0000 (0:00:00.882) 0:11:20.672 ****
2026-02-04 01:01:30.666325 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-04 01:01:30.666329 | orchestrator |
2026-02-04 01:01:30.666332 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2026-02-04 01:01:30.666335 | orchestrator | Wednesday 04 February 2026 01:00:12 +0000 (0:00:00.642) 0:11:21.314 ****
2026-02-04 01:01:30.666339 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:01:30.666342 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:01:30.666345 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:01:30.666348 | orchestrator |
2026-02-04 01:01:30.666352 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2026-02-04 01:01:30.666355 | orchestrator | Wednesday 04 February 2026 01:00:13 +0000 (0:00:00.810) 0:11:22.125 ****
2026-02-04 01:01:30.666358 | orchestrator | changed: [testbed-node-3]
2026-02-04 01:01:30.666361 | orchestrator | changed: [testbed-node-5]
2026-02-04 01:01:30.666365 | orchestrator | changed: [testbed-node-4]
2026-02-04 01:01:30.666368 | orchestrator |
2026-02-04 01:01:30.666371 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2026-02-04 01:01:30.666374 | orchestrator | Wednesday 04 February 2026 01:00:14 +0000 (0:00:01.467) 0:11:23.592 ****
2026-02-04 01:01:30.666377 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-04 01:01:30.666381 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-04 01:01:30.666384 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-04 01:01:30.666387 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:01:30.666390 | orchestrator |
2026-02-04 01:01:30.666394 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2026-02-04 01:01:30.666397 | orchestrator | Wednesday 04 February 2026 01:00:15 +0000 (0:00:00.927) 0:11:24.520 ****
2026-02-04 01:01:30.666400 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:01:30.666403 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:01:30.666407 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:01:30.666410 | orchestrator |
2026-02-04 01:01:30.666413 | orchestrator | PLAY [Apply role ceph-rgw] *****************************************************
2026-02-04 01:01:30.666416 | orchestrator |
2026-02-04 01:01:30.666420 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-04 01:01:30.666423 | orchestrator | Wednesday 04 February 2026 01:00:16 +0000 (0:00:00.717) 0:11:25.237 ****
2026-02-04 01:01:30.666426 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-04 01:01:30.666429 | orchestrator |
2026-02-04 01:01:30.666432 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-04 01:01:30.666436 | orchestrator | Wednesday 04 February 2026 01:00:17 +0000 (0:00:01.156) 0:11:26.394 ****
2026-02-04 01:01:30.666439 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-04 01:01:30.666442 | orchestrator |
2026-02-04 01:01:30.666445 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-04 01:01:30.666449 | orchestrator | Wednesday 04 February 2026 01:00:18 +0000 (0:00:00.542) 0:11:26.936 ****
2026-02-04 01:01:30.666452 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:01:30.666455 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:01:30.666458 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:01:30.666464 | orchestrator |
2026-02-04 01:01:30.666467 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-04 01:01:30.666470 | orchestrator | Wednesday 04 February 2026 01:00:18 +0000 (0:00:00.657) 0:11:27.594 ****
2026-02-04 01:01:30.666474 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:01:30.666477 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:01:30.666480 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:01:30.666484 | orchestrator |
2026-02-04 01:01:30.666487 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-04 01:01:30.666490 | orchestrator | Wednesday 04 February 2026 01:00:19 +0000 (0:00:00.950) 0:11:28.544 ****
2026-02-04 01:01:30.666493 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:01:30.666497 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:01:30.666500 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:01:30.666503 | orchestrator |
2026-02-04 01:01:30.666506 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-04 01:01:30.666510 | orchestrator | Wednesday 04 February 2026 01:00:20 +0000 (0:00:01.184) 0:11:29.728 ****
2026-02-04 01:01:30.666513 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:01:30.666516 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:01:30.666519 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:01:30.666522 | orchestrator |
2026-02-04 01:01:30.666526 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-04 01:01:30.666539 | orchestrator | Wednesday 04 February 2026 01:00:21 +0000 (0:00:00.945) 0:11:30.674 ****
2026-02-04 01:01:30.666542 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:01:30.666546 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:01:30.666549 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:01:30.666552 | orchestrator |
2026-02-04 01:01:30.666555 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-04 01:01:30.666559 | orchestrator | Wednesday 04 February 2026 01:00:22 +0000 (0:00:00.786) 0:11:31.460 ****
2026-02-04 01:01:30.666562 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:01:30.666567 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:01:30.666571 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:01:30.666574 | orchestrator |
2026-02-04 01:01:30.666577 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-04 01:01:30.666580 | orchestrator | Wednesday 04 February 2026 01:00:23 +0000 (0:00:00.450) 0:11:31.911 ****
2026-02-04 01:01:30.666584 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:01:30.666587 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:01:30.666590 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:01:30.666594 | orchestrator |
2026-02-04 01:01:30.666597 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-04 01:01:30.666600 | orchestrator | Wednesday 04 February 2026 01:00:23 +0000 (0:00:00.439) 0:11:32.350 ****
2026-02-04 01:01:30.666603 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:01:30.666607 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:01:30.666613 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:01:30.666616 | orchestrator |
2026-02-04 01:01:30.666620 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-04 01:01:30.666623 | orchestrator | Wednesday 04 February 2026 01:00:24 +0000 (0:00:01.054) 0:11:33.405 ****
2026-02-04 01:01:30.666626 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:01:30.666630 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:01:30.666633 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:01:30.666636 | orchestrator |
2026-02-04 01:01:30.666639 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-04 01:01:30.666643 | orchestrator | Wednesday 04 February 2026 01:00:26 +0000 (0:00:01.516) 0:11:34.921 ****
2026-02-04 01:01:30.666646 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:01:30.666649 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:01:30.666652 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:01:30.666656 | orchestrator |
2026-02-04 01:01:30.666659 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-04 01:01:30.666664 | orchestrator | Wednesday 04 February 2026 01:00:26 +0000 (0:00:00.362) 0:11:35.283 ****
2026-02-04 01:01:30.666668 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:01:30.666671 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:01:30.666674 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:01:30.666677 | orchestrator |
2026-02-04 01:01:30.666681 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-04 01:01:30.666684 | orchestrator | Wednesday 04 February 2026 01:00:26 +0000 (0:00:00.323) 0:11:35.607 ****
2026-02-04 01:01:30.666687 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:01:30.666690 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:01:30.666694 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:01:30.666697 | orchestrator |
2026-02-04 01:01:30.666700 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-04 01:01:30.666703 | orchestrator | Wednesday 04 February 2026 01:00:27 +0000 (0:00:00.346) 0:11:35.954 ****
2026-02-04 01:01:30.666706 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:01:30.666710 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:01:30.666713 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:01:30.666716 | orchestrator |
2026-02-04 01:01:30.666719 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-04 01:01:30.666723 | orchestrator | Wednesday 04 February 2026 01:00:27 +0000 (0:00:00.614) 0:11:36.569 ****
2026-02-04 01:01:30.666726 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:01:30.666729 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:01:30.666732 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:01:30.666735 | orchestrator |
2026-02-04 01:01:30.666739 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-04 01:01:30.666742 | orchestrator | Wednesday 04 February 2026 01:00:28 +0000 (0:00:00.398) 0:11:36.967 ****
2026-02-04 01:01:30.666745 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:01:30.666749 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:01:30.666752 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:01:30.666755 | orchestrator |
2026-02-04 01:01:30.666758 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-04 01:01:30.666762 | orchestrator | Wednesday 04 February 2026 01:00:28 +0000 (0:00:00.301) 0:11:37.269 ****
2026-02-04 01:01:30.666765 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:01:30.666768 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:01:30.666771 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:01:30.666775 | orchestrator |
2026-02-04 01:01:30.666778 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-04 01:01:30.666781 | orchestrator | Wednesday 04 February 2026 01:00:28 +0000 (0:00:00.313) 0:11:37.582 ****
2026-02-04 01:01:30.666784 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:01:30.666788 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:01:30.666791 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:01:30.666794 | orchestrator |
2026-02-04 01:01:30.666797 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-04 01:01:30.666801 | orchestrator | Wednesday 04 February 2026 01:00:29 +0000 (0:00:00.520) 0:11:38.103 ****
2026-02-04 01:01:30.666804 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:01:30.666807 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:01:30.666810 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:01:30.666814 | orchestrator |
2026-02-04 01:01:30.666817 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-04 01:01:30.666820 | orchestrator | Wednesday 04 February 2026 01:00:29 +0000 (0:00:00.306) 0:11:38.409 ****
2026-02-04 01:01:30.666823 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:01:30.666827 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:01:30.666830 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:01:30.666833 | orchestrator |
2026-02-04 01:01:30.666836 | orchestrator | TASK [ceph-rgw : Include common.yml] *******************************************
2026-02-04 01:01:30.666840 | orchestrator | Wednesday 04 February 2026 01:00:30 +0000 (0:00:00.526) 0:11:38.936 ****
2026-02-04 01:01:30.666846 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-04 01:01:30.666849 | orchestrator |
2026-02-04 01:01:30.666852 | orchestrator | TASK [ceph-rgw : Get keys from monitors] ***************************************
2026-02-04 01:01:30.666855 | orchestrator | Wednesday 04 February 2026 01:00:31 +0000 (0:00:00.985) 0:11:39.921 ****
2026-02-04 01:01:30.666859 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-04 01:01:30.666864 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-02-04 01:01:30.666867 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-02-04 01:01:30.666870 | orchestrator |
2026-02-04 01:01:30.666874 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
2026-02-04 01:01:30.666877 | orchestrator | Wednesday 04 February 2026 01:00:33 +0000 (0:00:02.048) 0:11:41.970 ****
2026-02-04 01:01:30.666880 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-02-04 01:01:30.666883 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-02-04 01:01:30.666886 | orchestrator | changed: [testbed-node-3]
2026-02-04 01:01:30.666890 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-02-04 01:01:30.666893 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-02-04 01:01:30.666896 | orchestrator | changed: [testbed-node-4]
2026-02-04 01:01:30.666899 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-02-04 01:01:30.666905 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-02-04 01:01:30.666908 | orchestrator | changed: [testbed-node-5]
2026-02-04 01:01:30.666911 | orchestrator |
2026-02-04 01:01:30.666914 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] **********
2026-02-04 01:01:30.666918 | orchestrator | Wednesday 04 February 2026 01:00:34 +0000 (0:00:01.097) 0:11:43.067 ****
2026-02-04 01:01:30.666921 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:01:30.666924 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:01:30.666927 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:01:30.666931 | orchestrator |
2026-02-04 01:01:30.666934 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ******************************
2026-02-04 01:01:30.666937 | orchestrator | Wednesday 04 February 2026 01:00:34 +0000 (0:00:00.500) 0:11:43.568 ****
2026-02-04 01:01:30.666940 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-04 01:01:30.666944 | orchestrator |
2026-02-04 01:01:30.666947 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] *****************************
2026-02-04 01:01:30.666950 | orchestrator | Wednesday 04 February 2026 01:00:35 +0000 (0:00:00.607) 0:11:44.175 ****
2026-02-04 01:01:30.666954 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-02-04 01:01:30.666957 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-02-04 01:01:30.666961 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-04 01:01:30.666964 | orchestrator |
2026-02-04 01:01:30.666967 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ******************************************
2026-02-04 01:01:30.666970 | orchestrator | Wednesday 04 February 2026 01:00:36 +0000 (0:00:00.837) 0:11:45.013 ****
2026-02-04 01:01:30.666974 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-04 01:01:30.666977 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
2026-02-04 01:01:30.666980 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-04 01:01:30.666983 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
2026-02-04 01:01:30.666989 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-04 01:01:30.666992 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
2026-02-04 01:01:30.666995 | orchestrator |
2026-02-04 01:01:30.666998 | orchestrator | TASK [ceph-rgw : Get keys from monitors] ***************************************
2026-02-04 01:01:30.667002 | orchestrator | Wednesday 04 February 2026 01:00:40 +0000 (0:00:04.563) 0:11:49.577 ****
2026-02-04 01:01:30.667005 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-04 01:01:30.667008 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-02-04 01:01:30.667011 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-04 01:01:30.667015 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}]
2026-02-04 01:01:30.667018 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-04 01:01:30.667021 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}]
2026-02-04 01:01:30.667024 | orchestrator |
2026-02-04 01:01:30.667027 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
2026-02-04 01:01:30.667031 | orchestrator | Wednesday 04 February 2026 01:00:43 +0000 (0:00:02.194) 0:11:51.771 ****
2026-02-04 01:01:30.667034 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-02-04 01:01:30.667037 | orchestrator | changed: [testbed-node-3]
2026-02-04 01:01:30.667040 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-02-04 01:01:30.667044 | orchestrator | changed: [testbed-node-4]
2026-02-04 01:01:30.667047 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-02-04 01:01:30.667050 | orchestrator | changed: [testbed-node-5]
2026-02-04 01:01:30.667053 | orchestrator |
2026-02-04 01:01:30.667057 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] **************************************
2026-02-04 01:01:30.667060 | orchestrator | Wednesday 04 February 2026 01:00:44 +0000 (0:00:01.334) 0:11:53.105 ****
2026-02-04 01:01:30.667063 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3
2026-02-04 01:01:30.667066 | orchestrator |
2026-02-04 01:01:30.667071 | orchestrator | TASK [ceph-rgw : Create ec profile] ********************************************
2026-02-04 01:01:30.667075 | orchestrator | Wednesday 04 February 2026 01:00:44 +0000 (0:00:00.263) 0:11:53.369 ****
2026-02-04 01:01:30.667078 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-04 01:01:30.667081 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-04 01:01:30.667085 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-04 01:01:30.667090 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-04 01:01:30.667093 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-04 01:01:30.667096 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:01:30.667100 | orchestrator |
2026-02-04 01:01:30.667103 | orchestrator | TASK [ceph-rgw : Set crush rule] ***********************************************
2026-02-04 01:01:30.667106 | orchestrator | Wednesday 04 February 2026 01:00:45 +0000 (0:00:01.214) 0:11:54.584 ****
2026-02-04 01:01:30.667109 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-04 01:01:30.667113 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-04 01:01:30.667116 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-04 01:01:30.667121 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-04 01:01:30.667124 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-04 01:01:30.667128 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:01:30.667131 | orchestrator |
2026-02-04 01:01:30.667134 | orchestrator | TASK [ceph-rgw : Create rgw pools] *********************************************
2026-02-04 01:01:30.667137 | orchestrator | Wednesday 04 February 2026 01:00:46 +0000 (0:00:01.004) 0:11:55.589 ****
2026-02-04 01:01:30.667141 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-04 01:01:30.667144 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-04 01:01:30.667147 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-04 01:01:30.667151 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-04 01:01:30.667154 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-04 01:01:30.667157 | orchestrator |
2026-02-04 01:01:30.667161 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] *************************
2026-02-04 01:01:30.667164 | orchestrator | Wednesday 04 February 2026 01:01:14 +0000 (0:00:28.109) 0:12:23.699 ****
2026-02-04 01:01:30.667167 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:01:30.667170 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:01:30.667175 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:01:30.667179 | orchestrator |
2026-02-04 01:01:30.667182 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ******************************
2026-02-04 01:01:30.667185 | orchestrator | Wednesday 04 February 2026 01:01:15 +0000 (0:00:00.673) 0:12:24.373 ****
2026-02-04 01:01:30.667189 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:01:30.667192 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:01:30.667195 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:01:30.667198 | orchestrator |
2026-02-04 01:01:30.667202 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] *********************************
2026-02-04 01:01:30.667205 | orchestrator | Wednesday 04 February 2026 01:01:16 +0000 (0:00:00.449) 0:12:24.822 ****
2026-02-04 01:01:30.667208 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-04 01:01:30.667211 | orchestrator |
2026-02-04 01:01:30.667214 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] *************************************
2026-02-04 01:01:30.667218 | orchestrator | Wednesday 04 February 2026 01:01:16 +0000 (0:00:00.620) 0:12:25.443 ****
2026-02-04 01:01:30.667221 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-04 01:01:30.667224 | orchestrator |
2026-02-04 01:01:30.667227 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] ***********************************
2026-02-04 01:01:30.667230 | orchestrator | Wednesday 04 February 2026 01:01:17 +0000 (0:00:00.881) 0:12:26.324 ****
2026-02-04 01:01:30.667234 | orchestrator | changed: [testbed-node-3]
2026-02-04 01:01:30.667237 | orchestrator | changed: [testbed-node-4]
2026-02-04 01:01:30.667242 | orchestrator | changed: [testbed-node-5]
2026-02-04 01:01:30.667246 | orchestrator |
2026-02-04 01:01:30.667249 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ********************
2026-02-04 01:01:30.667252 | orchestrator | Wednesday 04 February 2026 01:01:18 +0000 (0:00:01.410) 0:12:27.735 ****
2026-02-04 01:01:30.667258 | orchestrator | changed: [testbed-node-3]
2026-02-04 01:01:30.667261 | orchestrator | changed: [testbed-node-4]
2026-02-04 01:01:30.667264 | orchestrator | changed: [testbed-node-5]
2026-02-04 01:01:30.667268 | orchestrator |
2026-02-04 01:01:30.667271 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] ***********************************
2026-02-04 01:01:30.667274 | orchestrator | Wednesday 04 February 2026 01:01:20 +0000 (0:00:01.266) 0:12:29.001 ****
2026-02-04 01:01:30.667277 | orchestrator | changed: [testbed-node-3]
2026-02-04 01:01:30.667281 | orchestrator | changed: [testbed-node-4]
2026-02-04 01:01:30.667284 | orchestrator | changed: [testbed-node-5]
2026-02-04 01:01:30.667287 | orchestrator |
2026-02-04 01:01:30.667292 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] **********************************
2026-02-04 01:01:30.667295 | orchestrator | Wednesday 04 February 2026 01:01:22 +0000 (0:00:02.414) 0:12:31.416 ****
2026-02-04 01:01:30.667299 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-02-04 01:01:30.667302 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-02-04 01:01:30.667305 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-04 01:01:30.667309 | orchestrator |
2026-02-04 01:01:30.667312 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-02-04 01:01:30.667315 | orchestrator | Wednesday 04 February 2026 01:01:25 +0000 (0:00:02.675) 0:12:34.091 ****
2026-02-04 01:01:30.667318 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:01:30.667322 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:01:30.667325 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:01:30.667328 | orchestrator |
2026-02-04 01:01:30.667331 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2026-02-04 01:01:30.667335 | orchestrator | Wednesday 04 February 2026 01:01:26 +0000 (0:00:00.950) 0:12:35.042 ****
2026-02-04 01:01:30.667338 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-04 01:01:30.667341 | orchestrator |
2026-02-04 01:01:30.667344 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
2026-02-04 01:01:30.667348 | orchestrator | Wednesday 04 February 2026 01:01:26 +0000 (0:00:00.623) 0:12:35.665 ****
2026-02-04 01:01:30.667351 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:01:30.667354 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:01:30.667357 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:01:30.667361 | orchestrator |
2026-02-04 01:01:30.667364 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
2026-02-04 01:01:30.667367 | orchestrator | Wednesday 04 February 2026 01:01:27 +0000 (0:00:00.354) 0:12:36.019 ****
2026-02-04 01:01:30.667370 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:01:30.667374 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:01:30.667377 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:01:30.667380 | orchestrator |
2026-02-04 01:01:30.667383 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
2026-02-04 01:01:30.667387 | orchestrator | Wednesday 04 February 2026 01:01:28 +0000 (0:00:01.068) 0:12:37.088 ****
2026-02-04 01:01:30.667390 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-04 01:01:30.667393 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-04 01:01:30.667396 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-04 01:01:30.667400 | orchestrator | skipping: [testbed-node-3]
2026-02-04
01:01:30.667403 | orchestrator | 2026-02-04 01:01:30.667406 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-02-04 01:01:30.667409 | orchestrator | Wednesday 04 February 2026 01:01:29 +0000 (0:00:00.688) 0:12:37.777 **** 2026-02-04 01:01:30.667413 | orchestrator | ok: [testbed-node-3] 2026-02-04 01:01:30.667416 | orchestrator | ok: [testbed-node-4] 2026-02-04 01:01:30.667421 | orchestrator | ok: [testbed-node-5] 2026-02-04 01:01:30.667424 | orchestrator | 2026-02-04 01:01:30.667428 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 01:01:30.667431 | orchestrator | testbed-node-0 : ok=141  changed=36  unreachable=0 failed=0 skipped=135  rescued=0 ignored=0 2026-02-04 01:01:30.667434 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2026-02-04 01:01:30.667438 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2026-02-04 01:01:30.667441 | orchestrator | testbed-node-3 : ok=186  changed=44  unreachable=0 failed=0 skipped=152  rescued=0 ignored=0 2026-02-04 01:01:30.667444 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2026-02-04 01:01:30.667448 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2026-02-04 01:01:30.667451 | orchestrator | 2026-02-04 01:01:30.667454 | orchestrator | 2026-02-04 01:01:30.667458 | orchestrator | 2026-02-04 01:01:30.667461 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-04 01:01:30.667466 | orchestrator | Wednesday 04 February 2026 01:01:29 +0000 (0:00:00.277) 0:12:38.055 **** 2026-02-04 01:01:30.667470 | orchestrator | =============================================================================== 2026-02-04 01:01:30.667473 | 
orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 40.45s 2026-02-04 01:01:30.667476 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 38.59s 2026-02-04 01:01:30.667480 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 36.59s 2026-02-04 01:01:30.667483 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 28.11s 2026-02-04 01:01:30.667486 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 15.47s 2026-02-04 01:01:30.667489 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 13.01s 2026-02-04 01:01:30.667494 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 10.68s 2026-02-04 01:01:30.667498 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 9.96s 2026-02-04 01:01:30.667501 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 8.72s 2026-02-04 01:01:30.667504 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 7.97s 2026-02-04 01:01:30.667507 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.48s 2026-02-04 01:01:30.667511 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 5.91s 2026-02-04 01:01:30.667514 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 5.49s 2026-02-04 01:01:30.667517 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 4.79s 2026-02-04 01:01:30.667520 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.56s 2026-02-04 01:01:30.667523 | orchestrator | ceph-mon : Generate systemd unit file for mon container ----------------- 4.47s 2026-02-04 01:01:30.667527 | orchestrator | 
ceph-facts : Find a running mon container ------------------------------- 4.34s 2026-02-04 01:01:30.667553 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 4.25s 2026-02-04 01:01:30.667556 | orchestrator | ceph-container-common : Enable ceph.target ------------------------------ 4.24s 2026-02-04 01:01:30.667560 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 4.17s 2026-02-04 01:01:30.667563 | orchestrator | 2026-02-04 01:01:30 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:01:33.696643 | orchestrator | 2026-02-04 01:01:33 | INFO  | Task c9fb5ae2-a449-45e6-a762-fb9d9f550899 is in state STARTED 2026-02-04 01:01:33.699979 | orchestrator | 2026-02-04 01:01:33 | INFO  | Task b5bd28da-733a-431a-8597-2634ace4d989 is in state STARTED 2026-02-04 01:01:33.704192 | orchestrator | 2026-02-04 01:01:33 | INFO  | Task 8d355d97-e214-48f6-8e55-8a7667fb456c is in state STARTED 2026-02-04 01:01:33.704729 | orchestrator | 2026-02-04 01:01:33 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:01:36.753656 | orchestrator | 2026-02-04 01:01:36 | INFO  | Task c9fb5ae2-a449-45e6-a762-fb9d9f550899 is in state STARTED 2026-02-04 01:01:36.754488 | orchestrator | 2026-02-04 01:01:36 | INFO  | Task b5bd28da-733a-431a-8597-2634ace4d989 is in state STARTED 2026-02-04 01:01:36.756457 | orchestrator | 2026-02-04 01:01:36 | INFO  | Task 8d355d97-e214-48f6-8e55-8a7667fb456c is in state STARTED 2026-02-04 01:01:36.756502 | orchestrator | 2026-02-04 01:01:36 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:01:39.792417 | orchestrator | 2026-02-04 01:01:39 | INFO  | Task c9fb5ae2-a449-45e6-a762-fb9d9f550899 is in state STARTED 2026-02-04 01:01:39.795614 | orchestrator | 2026-02-04 01:01:39 | INFO  | Task b5bd28da-733a-431a-8597-2634ace4d989 is in state STARTED 2026-02-04 01:01:39.797175 | orchestrator | 2026-02-04 01:01:39 | INFO  | Task 8d355d97-e214-48f6-8e55-8a7667fb456c is in 
state STARTED 2026-02-04 01:01:39.797243 | orchestrator | 2026-02-04 01:01:39 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:01:42.841167 | orchestrator | 2026-02-04 01:01:42 | INFO  | Task c9fb5ae2-a449-45e6-a762-fb9d9f550899 is in state STARTED 2026-02-04 01:01:42.843677 | orchestrator | 2026-02-04 01:01:42 | INFO  | Task b5bd28da-733a-431a-8597-2634ace4d989 is in state STARTED 2026-02-04 01:01:42.847072 | orchestrator | 2026-02-04 01:01:42 | INFO  | Task 8d355d97-e214-48f6-8e55-8a7667fb456c is in state STARTED 2026-02-04 01:01:42.847126 | orchestrator | 2026-02-04 01:01:42 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:01:45.897246 | orchestrator | 2026-02-04 01:01:45 | INFO  | Task c9fb5ae2-a449-45e6-a762-fb9d9f550899 is in state STARTED 2026-02-04 01:01:45.899121 | orchestrator | 2026-02-04 01:01:45 | INFO  | Task b5bd28da-733a-431a-8597-2634ace4d989 is in state STARTED 2026-02-04 01:01:45.900945 | orchestrator | 2026-02-04 01:01:45 | INFO  | Task 8d355d97-e214-48f6-8e55-8a7667fb456c is in state STARTED 2026-02-04 01:01:45.900973 | orchestrator | 2026-02-04 01:01:45 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:01:48.939598 | orchestrator | 2026-02-04 01:01:48 | INFO  | Task c9fb5ae2-a449-45e6-a762-fb9d9f550899 is in state STARTED 2026-02-04 01:01:48.943979 | orchestrator | 2026-02-04 01:01:48 | INFO  | Task b5bd28da-733a-431a-8597-2634ace4d989 is in state STARTED 2026-02-04 01:01:48.944121 | orchestrator | 2026-02-04 01:01:48 | INFO  | Task 8d355d97-e214-48f6-8e55-8a7667fb456c is in state STARTED 2026-02-04 01:01:48.944132 | orchestrator | 2026-02-04 01:01:48 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:01:51.985654 | orchestrator | 2026-02-04 01:01:51 | INFO  | Task c9fb5ae2-a449-45e6-a762-fb9d9f550899 is in state STARTED 2026-02-04 01:01:51.987481 | orchestrator | 2026-02-04 01:01:51 | INFO  | Task b5bd28da-733a-431a-8597-2634ace4d989 is in state STARTED 2026-02-04 01:01:51.991082 | orchestrator 
| 2026-02-04 01:01:51 | INFO  | Task 8d355d97-e214-48f6-8e55-8a7667fb456c is in state STARTED 2026-02-04 01:01:51.991144 | orchestrator | 2026-02-04 01:01:51 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:01:55.046279 | orchestrator | 2026-02-04 01:01:55 | INFO  | Task c9fb5ae2-a449-45e6-a762-fb9d9f550899 is in state STARTED 2026-02-04 01:01:55.048493 | orchestrator | 2026-02-04 01:01:55 | INFO  | Task b5bd28da-733a-431a-8597-2634ace4d989 is in state STARTED 2026-02-04 01:01:55.050157 | orchestrator | 2026-02-04 01:01:55 | INFO  | Task 8d355d97-e214-48f6-8e55-8a7667fb456c is in state STARTED 2026-02-04 01:01:55.050419 | orchestrator | 2026-02-04 01:01:55 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:01:58.120133 | orchestrator | 2026-02-04 01:01:58 | INFO  | Task c9fb5ae2-a449-45e6-a762-fb9d9f550899 is in state STARTED 2026-02-04 01:01:58.122998 | orchestrator | 2026-02-04 01:01:58 | INFO  | Task b5bd28da-733a-431a-8597-2634ace4d989 is in state STARTED 2026-02-04 01:01:58.125102 | orchestrator | 2026-02-04 01:01:58 | INFO  | Task 8d355d97-e214-48f6-8e55-8a7667fb456c is in state STARTED 2026-02-04 01:01:58.125159 | orchestrator | 2026-02-04 01:01:58 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:02:01.173427 | orchestrator | 2026-02-04 01:02:01 | INFO  | Task c9fb5ae2-a449-45e6-a762-fb9d9f550899 is in state STARTED 2026-02-04 01:02:01.175055 | orchestrator | 2026-02-04 01:02:01 | INFO  | Task b5bd28da-733a-431a-8597-2634ace4d989 is in state STARTED 2026-02-04 01:02:01.177438 | orchestrator | 2026-02-04 01:02:01 | INFO  | Task 8d355d97-e214-48f6-8e55-8a7667fb456c is in state STARTED 2026-02-04 01:02:01.177922 | orchestrator | 2026-02-04 01:02:01 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:02:04.231953 | orchestrator | 2026-02-04 01:02:04 | INFO  | Task c9fb5ae2-a449-45e6-a762-fb9d9f550899 is in state STARTED 2026-02-04 01:02:04.233395 | orchestrator | 2026-02-04 01:02:04 | INFO  | Task 
b5bd28da-733a-431a-8597-2634ace4d989 is in state STARTED 2026-02-04 01:02:04.236256 | orchestrator | 2026-02-04 01:02:04 | INFO  | Task 8d355d97-e214-48f6-8e55-8a7667fb456c is in state STARTED 2026-02-04 01:02:04.236304 | orchestrator | 2026-02-04 01:02:04 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:02:07.284286 | orchestrator | 2026-02-04 01:02:07 | INFO  | Task c9fb5ae2-a449-45e6-a762-fb9d9f550899 is in state STARTED 2026-02-04 01:02:07.287008 | orchestrator | 2026-02-04 01:02:07 | INFO  | Task b5bd28da-733a-431a-8597-2634ace4d989 is in state STARTED 2026-02-04 01:02:07.289333 | orchestrator | 2026-02-04 01:02:07 | INFO  | Task 8d355d97-e214-48f6-8e55-8a7667fb456c is in state STARTED 2026-02-04 01:02:07.289450 | orchestrator | 2026-02-04 01:02:07 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:02:10.339557 | orchestrator | 2026-02-04 01:02:10 | INFO  | Task c9fb5ae2-a449-45e6-a762-fb9d9f550899 is in state STARTED 2026-02-04 01:02:10.341505 | orchestrator | 2026-02-04 01:02:10 | INFO  | Task b5bd28da-733a-431a-8597-2634ace4d989 is in state STARTED 2026-02-04 01:02:10.343821 | orchestrator | 2026-02-04 01:02:10 | INFO  | Task 8d355d97-e214-48f6-8e55-8a7667fb456c is in state STARTED 2026-02-04 01:02:10.343874 | orchestrator | 2026-02-04 01:02:10 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:02:13.384883 | orchestrator | 2026-02-04 01:02:13 | INFO  | Task c9fb5ae2-a449-45e6-a762-fb9d9f550899 is in state STARTED 2026-02-04 01:02:13.385599 | orchestrator | 2026-02-04 01:02:13 | INFO  | Task b5bd28da-733a-431a-8597-2634ace4d989 is in state STARTED 2026-02-04 01:02:13.386902 | orchestrator | 2026-02-04 01:02:13 | INFO  | Task 8d355d97-e214-48f6-8e55-8a7667fb456c is in state STARTED 2026-02-04 01:02:13.387178 | orchestrator | 2026-02-04 01:02:13 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:02:16.442480 | orchestrator | 2026-02-04 01:02:16 | INFO  | Task c9fb5ae2-a449-45e6-a762-fb9d9f550899 is in state 
STARTED 2026-02-04 01:02:16.445280 | orchestrator | 2026-02-04 01:02:16 | INFO  | Task b5bd28da-733a-431a-8597-2634ace4d989 is in state STARTED 2026-02-04 01:02:16.451270 | orchestrator | 2026-02-04 01:02:16 | INFO  | Task 8d355d97-e214-48f6-8e55-8a7667fb456c is in state STARTED 2026-02-04 01:02:16.451313 | orchestrator | 2026-02-04 01:02:16 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:02:19.507562 | orchestrator | 2026-02-04 01:02:19 | INFO  | Task c9fb5ae2-a449-45e6-a762-fb9d9f550899 is in state SUCCESS 2026-02-04 01:02:19.508937 | orchestrator | 2026-02-04 01:02:19.508975 | orchestrator | 2026-02-04 01:02:19.508990 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-04 01:02:19.508995 | orchestrator | 2026-02-04 01:02:19.508999 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-04 01:02:19.509003 | orchestrator | Wednesday 04 February 2026 00:59:45 +0000 (0:00:00.377) 0:00:00.378 **** 2026-02-04 01:02:19.509007 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:02:19.509012 | orchestrator | ok: [testbed-node-1] 2026-02-04 01:02:19.509016 | orchestrator | ok: [testbed-node-2] 2026-02-04 01:02:19.509020 | orchestrator | 2026-02-04 01:02:19.509024 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-04 01:02:19.509028 | orchestrator | Wednesday 04 February 2026 00:59:45 +0000 (0:00:00.511) 0:00:00.889 **** 2026-02-04 01:02:19.509032 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2026-02-04 01:02:19.509036 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2026-02-04 01:02:19.509040 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2026-02-04 01:02:19.509044 | orchestrator | 2026-02-04 01:02:19.509048 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2026-02-04 01:02:19.509052 | 
orchestrator |
2026-02-04 01:02:19.509056 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2026-02-04 01:02:19.509059 | orchestrator | Wednesday 04 February 2026 00:59:46 +0000 (0:00:00.606) 0:00:01.495 ****
2026-02-04 01:02:19.509064 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 01:02:19.509068 | orchestrator |
2026-02-04 01:02:19.509072 | orchestrator | TASK [opensearch : Setting sysctl values] **************************************
2026-02-04 01:02:19.509075 | orchestrator | Wednesday 04 February 2026 00:59:46 +0000 (0:00:00.618) 0:00:02.114 ****
2026-02-04 01:02:19.509079 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-04 01:02:19.509083 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-04 01:02:19.509087 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-04 01:02:19.509091 | orchestrator |
2026-02-04 01:02:19.509095 | orchestrator | TASK [opensearch : Ensuring config directories exist] **************************
2026-02-04 01:02:19.509098 | orchestrator | Wednesday 04 February 2026 00:59:48 +0000 (0:00:01.857) 0:00:03.971 ****
2026-02-04 01:02:19.509104 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-02-04 01:02:19.509110 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-02-04 01:02:19.509140 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-02-04 01:02:19.509150 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-02-04 01:02:19.509159 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-02-04 01:02:19.509167 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-02-04 01:02:19.509178 | orchestrator |
2026-02-04 01:02:19.509184 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2026-02-04 01:02:19.509190 | orchestrator | Wednesday 04 February 2026 00:59:50 +0000 (0:00:02.265) 0:00:06.237 ****
2026-02-04 01:02:19.509197 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 01:02:19.509203 | orchestrator |
2026-02-04 01:02:19.509209 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] *****
2026-02-04 01:02:19.509216 | orchestrator | Wednesday 04 February 2026 00:59:51 +0000 (0:00:00.888) 0:00:07.126 ****
2026-02-04 01:02:19.509231 | orchestrator |
changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-02-04 01:02:19.509239 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-02-04 01:02:19.509243 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-02-04 01:02:19.509247 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-02-04 01:02:19.509259 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-02-04 01:02:19.509264 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-02-04 01:02:19.509268 | orchestrator |
2026-02-04 01:02:19.509272 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] ***
2026-02-04 01:02:19.509276 | orchestrator | Wednesday 04 February 2026 00:59:55 +0000 (0:00:03.355) 0:00:10.481 ****
2026-02-04 01:02:19.509280 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-02-04 01:02:19.509287 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-02-04 01:02:19.509291 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:02:19.509295 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-02-04 01:02:19.509304 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True,
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-04 01:02:19.509309 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:02:19.509313 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-04 01:02:19.509319 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 
'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-04 01:02:19.509323 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:02:19.509327 | orchestrator | 2026-02-04 01:02:19.509331 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2026-02-04 01:02:19.509335 | orchestrator | Wednesday 04 February 2026 00:59:56 +0000 (0:00:01.559) 0:00:12.040 **** 2026-02-04 01:02:19.509339 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-04 01:02:19.509348 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-04 01:02:19.509353 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:02:19.509357 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-04 01:02:19.509364 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-04 01:02:19.509368 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:02:19.509372 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-04 01:02:19.509381 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-04 01:02:19.509385 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:02:19.509389 | orchestrator | 2026-02-04 01:02:19.509393 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2026-02-04 01:02:19.509397 | orchestrator | Wednesday 04 February 2026 00:59:57 +0000 (0:00:01.113) 0:00:13.154 **** 2026-02-04 01:02:19.509401 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-04 01:02:19.509409 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 
'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-04 01:02:19.509413 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-04 01:02:19.509425 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-04 01:02:19.509429 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-04 01:02:19.509436 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 
'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-04 01:02:19.509441 | orchestrator | 2026-02-04 01:02:19.509444 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2026-02-04 01:02:19.509448 | orchestrator | Wednesday 04 February 2026 01:00:00 +0000 (0:00:03.094) 0:00:16.248 **** 2026-02-04 01:02:19.509452 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:02:19.509456 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:02:19.509460 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:02:19.509464 | orchestrator | 2026-02-04 01:02:19.509468 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2026-02-04 01:02:19.509472 | orchestrator | Wednesday 04 February 2026 01:00:04 +0000 (0:00:03.371) 0:00:19.619 **** 2026-02-04 01:02:19.509475 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:02:19.509479 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:02:19.509483 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:02:19.509487 | orchestrator | 2026-02-04 01:02:19.509491 | orchestrator | 
TASK [opensearch : Check opensearch containers] ******************************** 2026-02-04 01:02:19.509494 | orchestrator | Wednesday 04 February 2026 01:00:06 +0000 (0:00:02.672) 0:00:22.291 **** 2026-02-04 01:02:19.509498 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-04 01:02:19.509507 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-04 01:02:19.509514 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-04 01:02:19.509518 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-04 01:02:19.509522 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': 
{'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-04 01:02:19.509531 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': 
'5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-04 01:02:19.509539 | orchestrator | 2026-02-04 01:02:19.509545 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-02-04 01:02:19.509552 | orchestrator | Wednesday 04 February 2026 01:00:09 +0000 (0:00:02.753) 0:00:25.045 **** 2026-02-04 01:02:19.509558 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:02:19.509564 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:02:19.509570 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:02:19.509576 | orchestrator | 2026-02-04 01:02:19.509582 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-02-04 01:02:19.509588 | orchestrator | Wednesday 04 February 2026 01:00:10 +0000 (0:00:00.353) 0:00:25.399 **** 2026-02-04 01:02:19.509594 | orchestrator | 2026-02-04 01:02:19.509600 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-02-04 01:02:19.509606 | orchestrator | Wednesday 04 February 2026 01:00:10 +0000 (0:00:00.159) 0:00:25.558 **** 2026-02-04 01:02:19.509612 | orchestrator | 2026-02-04 01:02:19.509631 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-02-04 01:02:19.509638 | orchestrator | Wednesday 04 February 2026 01:00:10 +0000 (0:00:00.171) 0:00:25.730 **** 2026-02-04 01:02:19.509644 | orchestrator | 2026-02-04 01:02:19.509650 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2026-02-04 01:02:19.509657 | orchestrator | Wednesday 04 February 2026 01:00:10 +0000 (0:00:00.169) 0:00:25.899 **** 2026-02-04 01:02:19.509664 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:02:19.509671 | orchestrator | 2026-02-04 01:02:19.509677 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2026-02-04 01:02:19.509684 | orchestrator | 
Wednesday 04 February 2026 01:00:11 +0000 (0:00:01.218) 0:00:27.117 **** 2026-02-04 01:02:19.509688 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:02:19.509693 | orchestrator | 2026-02-04 01:02:19.509698 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2026-02-04 01:02:19.509702 | orchestrator | Wednesday 04 February 2026 01:00:12 +0000 (0:00:00.391) 0:00:27.509 **** 2026-02-04 01:02:19.509707 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:02:19.509711 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:02:19.509716 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:02:19.509721 | orchestrator | 2026-02-04 01:02:19.509725 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2026-02-04 01:02:19.509730 | orchestrator | Wednesday 04 February 2026 01:01:02 +0000 (0:00:49.996) 0:01:17.506 **** 2026-02-04 01:02:19.509734 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:02:19.509739 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:02:19.509743 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:02:19.509748 | orchestrator | 2026-02-04 01:02:19.509752 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-02-04 01:02:19.509757 | orchestrator | Wednesday 04 February 2026 01:02:04 +0000 (0:01:02.384) 0:02:19.890 **** 2026-02-04 01:02:19.509761 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 01:02:19.509766 | orchestrator | 2026-02-04 01:02:19.509770 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2026-02-04 01:02:19.509775 | orchestrator | Wednesday 04 February 2026 01:02:05 +0000 (0:00:00.760) 0:02:20.651 **** 2026-02-04 01:02:19.509779 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:02:19.509784 | orchestrator | 2026-02-04 01:02:19.509789 | 
orchestrator | TASK [opensearch : Wait for OpenSearch cluster to become healthy] ************** 2026-02-04 01:02:19.509793 | orchestrator | Wednesday 04 February 2026 01:02:07 +0000 (0:00:02.039) 0:02:22.691 **** 2026-02-04 01:02:19.509798 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:02:19.509803 | orchestrator | 2026-02-04 01:02:19.509807 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2026-02-04 01:02:19.509812 | orchestrator | Wednesday 04 February 2026 01:02:09 +0000 (0:00:01.880) 0:02:24.571 **** 2026-02-04 01:02:19.509816 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:02:19.509824 | orchestrator | 2026-02-04 01:02:19.509829 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2026-02-04 01:02:19.509833 | orchestrator | Wednesday 04 February 2026 01:02:11 +0000 (0:00:02.156) 0:02:26.727 **** 2026-02-04 01:02:19.509838 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:02:19.509842 | orchestrator | 2026-02-04 01:02:19.509847 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2026-02-04 01:02:19.509851 | orchestrator | Wednesday 04 February 2026 01:02:13 +0000 (0:00:02.429) 0:02:29.156 **** 2026-02-04 01:02:19.509856 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:02:19.509860 | orchestrator | 2026-02-04 01:02:19.509865 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 01:02:19.509870 | orchestrator | testbed-node-0 : ok=19  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-04 01:02:19.509875 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-04 01:02:19.509886 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-04 01:02:19.509893 | orchestrator | 2026-02-04 01:02:19.509899 | orchestrator | 
2026-02-04 01:02:19.509905 | orchestrator | TASKS RECAP ********************************************************************
2026-02-04 01:02:19.509912 | orchestrator | Wednesday 04 February 2026 01:02:16 +0000 (0:00:02.388) 0:02:31.544 ****
2026-02-04 01:02:19.509919 | orchestrator | ===============================================================================
2026-02-04 01:02:19.509925 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 62.38s
2026-02-04 01:02:19.509932 | orchestrator | opensearch : Restart opensearch container ------------------------------ 50.00s
2026-02-04 01:02:19.509939 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 3.37s
2026-02-04 01:02:19.509945 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 3.36s
2026-02-04 01:02:19.509951 | orchestrator | opensearch : Copying over config.json files for services ---------------- 3.09s
2026-02-04 01:02:19.509956 | orchestrator | opensearch : Check opensearch containers -------------------------------- 2.75s
2026-02-04 01:02:19.509963 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 2.67s
2026-02-04 01:02:19.509969 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.43s
2026-02-04 01:02:19.509976 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.39s
2026-02-04 01:02:19.509983 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 2.27s
2026-02-04 01:02:19.509990 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.16s
2026-02-04 01:02:19.509994 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.04s
2026-02-04 01:02:19.509998 | orchestrator | opensearch : Wait for OpenSearch cluster to become healthy -------------- 1.88s
2026-02-04 01:02:19.510001 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 1.86s
2026-02-04 01:02:19.510005 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.56s
2026-02-04 01:02:19.510009 | orchestrator | opensearch : Disable shard allocation ----------------------------------- 1.22s
2026-02-04 01:02:19.510056 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.11s
2026-02-04 01:02:19.510061 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.89s
2026-02-04 01:02:19.510065 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.76s
2026-02-04 01:02:19.510068 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.62s
2026-02-04 01:02:19.510688 | orchestrator | 2026-02-04 01:02:19 | INFO  | Task b5bd28da-733a-431a-8597-2634ace4d989 is in state STARTED
2026-02-04 01:02:19.512327 | orchestrator | 2026-02-04 01:02:19 | INFO  | Task 8d355d97-e214-48f6-8e55-8a7667fb456c is in state STARTED
2026-02-04 01:02:19.512350 | orchestrator | 2026-02-04 01:02:19 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:02:22.570229 | orchestrator | 2026-02-04 01:02:22 | INFO  | Task b5bd28da-733a-431a-8597-2634ace4d989 is in state STARTED
2026-02-04 01:02:22.573155 | orchestrator | 2026-02-04 01:02:22 | INFO  | Task 8d355d97-e214-48f6-8e55-8a7667fb456c is in state STARTED
2026-02-04 01:02:22.573373 | orchestrator | 2026-02-04 01:02:22 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:02:25.626589 | orchestrator | 2026-02-04 01:02:25 | INFO  | Task b5bd28da-733a-431a-8597-2634ace4d989 is in state STARTED
2026-02-04 01:02:25.628799 | orchestrator | 2026-02-04 01:02:25 | INFO  | Task 8d355d97-e214-48f6-8e55-8a7667fb456c is in state STARTED
2026-02-04 01:02:25.629344 | orchestrator | 2026-02-04 01:02:25 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:02:28.676719 | orchestrator | 2026-02-04 01:02:28 | INFO  | Task b5bd28da-733a-431a-8597-2634ace4d989 is in state STARTED
2026-02-04 01:02:28.677902 | orchestrator | 2026-02-04 01:02:28 | INFO  | Task 8d355d97-e214-48f6-8e55-8a7667fb456c is in state STARTED
2026-02-04 01:02:28.677954 | orchestrator | 2026-02-04 01:02:28 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:02:31.737144 | orchestrator | 2026-02-04 01:02:31 | INFO  | Task b5bd28da-733a-431a-8597-2634ace4d989 is in state STARTED
2026-02-04 01:02:31.739232 | orchestrator | 2026-02-04 01:02:31 | INFO  | Task 8d355d97-e214-48f6-8e55-8a7667fb456c is in state STARTED
2026-02-04 01:02:31.739310 | orchestrator | 2026-02-04 01:02:31 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:02:34.793716 | orchestrator | 2026-02-04 01:02:34 | INFO  | Task b5bd28da-733a-431a-8597-2634ace4d989 is in state STARTED
2026-02-04 01:02:34.795704 | orchestrator | 2026-02-04 01:02:34 | INFO  | Task 8d355d97-e214-48f6-8e55-8a7667fb456c is in state STARTED
2026-02-04 01:02:34.795764 | orchestrator | 2026-02-04 01:02:34 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:02:37.853844 | orchestrator | 2026-02-04 01:02:37 | INFO  | Task b5bd28da-733a-431a-8597-2634ace4d989 is in state STARTED
2026-02-04 01:02:37.855381 | orchestrator | 2026-02-04 01:02:37 | INFO  | Task 8d355d97-e214-48f6-8e55-8a7667fb456c is in state STARTED
2026-02-04 01:02:37.855441 | orchestrator | 2026-02-04 01:02:37 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:02:40.896581 | orchestrator | 2026-02-04 01:02:40 | INFO  | Task b5bd28da-733a-431a-8597-2634ace4d989 is in state STARTED
2026-02-04 01:02:40.898922 | orchestrator | 2026-02-04 01:02:40 | INFO  | Task 8d355d97-e214-48f6-8e55-8a7667fb456c is in state STARTED
2026-02-04 01:02:40.898950 | orchestrator | 2026-02-04 01:02:40 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:02:43.945579 | orchestrator | 2026-02-04 01:02:43 | INFO  | Task b5bd28da-733a-431a-8597-2634ace4d989 is in state STARTED
2026-02-04 01:02:43.946941 | orchestrator | 2026-02-04 01:02:43 | INFO  | Task 8d355d97-e214-48f6-8e55-8a7667fb456c is in state STARTED
2026-02-04 01:02:43.946978 | orchestrator | 2026-02-04 01:02:43 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:02:46.995715 | orchestrator | 2026-02-04 01:02:46 | INFO  | Task b5bd28da-733a-431a-8597-2634ace4d989 is in state STARTED
2026-02-04 01:02:46.999013 | orchestrator | 2026-02-04 01:02:47 | INFO  | Task 8d355d97-e214-48f6-8e55-8a7667fb456c is in state STARTED
2026-02-04 01:02:46.999099 | orchestrator | 2026-02-04 01:02:47 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:02:50.055789 | orchestrator | 2026-02-04 01:02:50 | INFO  | Task b5bd28da-733a-431a-8597-2634ace4d989 is in state STARTED
2026-02-04 01:02:50.059364 | orchestrator | 2026-02-04 01:02:50 | INFO  | Task 8d355d97-e214-48f6-8e55-8a7667fb456c is in state STARTED
2026-02-04 01:02:50.059642 | orchestrator | 2026-02-04 01:02:50 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:02:53.109322 | orchestrator | 2026-02-04 01:02:53 | INFO  | Task b5bd28da-733a-431a-8597-2634ace4d989 is in state STARTED
2026-02-04 01:02:53.109657 | orchestrator | 2026-02-04 01:02:53 | INFO  | Task 8d355d97-e214-48f6-8e55-8a7667fb456c is in state STARTED
2026-02-04 01:02:53.109739 | orchestrator | 2026-02-04 01:02:53 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:02:56.159466 | orchestrator | 2026-02-04 01:02:56 | INFO  | Task b5bd28da-733a-431a-8597-2634ace4d989 is in state STARTED
2026-02-04 01:02:56.161582 | orchestrator | 2026-02-04 01:02:56 | INFO  | Task 8d355d97-e214-48f6-8e55-8a7667fb456c is in state STARTED
2026-02-04 01:02:56.161644 | orchestrator | 2026-02-04 01:02:56 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:02:59.206296 | orchestrator | 2026-02-04 01:02:59 | INFO  | Task b5bd28da-733a-431a-8597-2634ace4d989 is in state STARTED
2026-02-04 01:02:59.208065 | orchestrator | 2026-02-04 01:02:59 | INFO  | Task 8d355d97-e214-48f6-8e55-8a7667fb456c is in state STARTED
2026-02-04 01:02:59.208246 | orchestrator | 2026-02-04 01:02:59 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:03:02.262834 | orchestrator | 2026-02-04 01:03:02 | INFO  | Task b5bd28da-733a-431a-8597-2634ace4d989 is in state STARTED
2026-02-04 01:03:02.264334 | orchestrator | 2026-02-04 01:03:02 | INFO  | Task 8d355d97-e214-48f6-8e55-8a7667fb456c is in state STARTED
2026-02-04 01:03:02.264970 | orchestrator | 2026-02-04 01:03:02 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:03:05.320550 | orchestrator | 2026-02-04 01:03:05 | INFO  | Task b5bd28da-733a-431a-8597-2634ace4d989 is in state STARTED
2026-02-04 01:03:05.322192 | orchestrator | 2026-02-04 01:03:05 | INFO  | Task 9dea05c3-7501-47c0-8902-973d5d86909e is in state STARTED
2026-02-04 01:03:05.329895 | orchestrator |
2026-02-04 01:03:05.329958 | orchestrator | 2026-02-04 01:03:05 | INFO  | Task 8d355d97-e214-48f6-8e55-8a7667fb456c is in state SUCCESS
2026-02-04 01:03:05.332119 | orchestrator |
2026-02-04 01:03:05.332181 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************
2026-02-04 01:03:05.332192 | orchestrator |
2026-02-04 01:03:05.332198 | orchestrator | TASK [Inform the user about the following task] ********************************
2026-02-04 01:03:05.332205 | orchestrator | Wednesday 04 February 2026 00:59:44 +0000 (0:00:00.117) 0:00:00.117 ****
2026-02-04 01:03:05.332212 | orchestrator | ok: [localhost] => {
2026-02-04 01:03:05.332221 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine."
2026-02-04 01:03:05.332228 | orchestrator | }
2026-02-04 01:03:05.332235 | orchestrator |
2026-02-04 01:03:05.332241 | orchestrator | TASK [Check MariaDB service] ***************************************************
2026-02-04 01:03:05.332248 | orchestrator | Wednesday 04 February 2026 00:59:44 +0000 (0:00:00.075) 0:00:00.192 ****
2026-02-04 01:03:05.332254 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"}
2026-02-04 01:03:05.332263 | orchestrator | ...ignoring
2026-02-04 01:03:05.332270 | orchestrator |
2026-02-04 01:03:05.332274 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ********
2026-02-04 01:03:05.332290 | orchestrator | Wednesday 04 February 2026 00:59:48 +0000 (0:00:03.366) 0:00:03.559 ****
2026-02-04 01:03:05.332294 | orchestrator | skipping: [localhost]
2026-02-04 01:03:05.332298 | orchestrator |
2026-02-04 01:03:05.332302 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ******************************
2026-02-04 01:03:05.332324 | orchestrator | Wednesday 04 February 2026 00:59:48 +0000 (0:00:00.088) 0:00:03.648 ****
2026-02-04 01:03:05.332328 | orchestrator | ok: [localhost]
2026-02-04 01:03:05.332331 | orchestrator |
2026-02-04 01:03:05.332335 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-04 01:03:05.332339 | orchestrator |
2026-02-04 01:03:05.332343 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-04 01:03:05.332347 | orchestrator | Wednesday 04 February 2026 00:59:48 +0000 (0:00:00.343) 0:00:03.833 ****
2026-02-04 01:03:05.332351 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:03:05.332355 | orchestrator | ok: [testbed-node-1]
2026-02-04 01:03:05.332359 | orchestrator | ok: [testbed-node-2]
2026-02-04 01:03:05.332362 | orchestrator |
2026-02-04 01:03:05.332366 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-04 01:03:05.332491 | orchestrator | Wednesday 04 February 2026 00:59:48 +0000 (0:00:00.343) 0:00:04.176 ****
2026-02-04 01:03:05.332498 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True)
2026-02-04 01:03:05.332503 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True)
2026-02-04 01:03:05.332506 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True)
2026-02-04 01:03:05.332510 | orchestrator |
2026-02-04 01:03:05.332514 | orchestrator | PLAY [Apply role mariadb] ******************************************************
2026-02-04 01:03:05.332518 | orchestrator |
2026-02-04 01:03:05.332522 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] ***************************
2026-02-04 01:03:05.332526 | orchestrator | Wednesday 04 February 2026 00:59:49 +0000 (0:00:00.787) 0:00:04.964 ****
2026-02-04 01:03:05.332530 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-04 01:03:05.332544 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-02-04 01:03:05.332548 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-02-04 01:03:05.332551 | orchestrator |
2026-02-04 01:03:05.332555 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-02-04 01:03:05.332559 | orchestrator | Wednesday 04 February 2026 00:59:50 +0000 (0:00:00.632) 0:00:05.420 ****
2026-02-04 01:03:05.332590 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 01:03:05.332595 | orchestrator |
2026-02-04 01:03:05.332599 | orchestrator | TASK [mariadb : Ensuring config directories exist] *****************************
2026-02-04 01:03:05.332603 | orchestrator | Wednesday 04 February 2026 00:59:50 +0000 (0:00:00.632) 0:00:06.053 ****
2026-02-04 01:03:05.332622 | orchestrator | changed:
[testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-04 01:03:05.332639 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-04 01:03:05.332645 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-04 01:03:05.332650 | orchestrator | 2026-02-04 01:03:05.332860 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2026-02-04 01:03:05.332879 | orchestrator | Wednesday 04 February 2026 00:59:54 +0000 (0:00:04.058) 0:00:10.111 **** 2026-02-04 01:03:05.332883 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:03:05.332889 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:03:05.332893 | 
orchestrator | skipping: [testbed-node-2]
2026-02-04 01:03:05.332897 | orchestrator |
2026-02-04 01:03:05.332901 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] ***************************
2026-02-04 01:03:05.332905 | orchestrator | Wednesday 04 February 2026 00:59:55 +0000 (0:00:01.018) 0:00:11.130 ****
2026-02-04 01:03:05.332909 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:03:05.332913 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:03:05.332917 | orchestrator | changed: [testbed-node-0]
2026-02-04 01:03:05.332920 | orchestrator |
2026-02-04 01:03:05.332924 | orchestrator | TASK [mariadb : Copying over config.json files for services] *******************
2026-02-04 01:03:05.332928 | orchestrator | Wednesday 04 February 2026 00:59:57 +0000 (0:00:01.806) 0:00:12.936 ****
2026-02-04 01:03:05.332939 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port
3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-04 01:03:05.332949 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 
'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-04 01:03:05.332962 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout 
client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-04 01:03:05.332966 | orchestrator |
2026-02-04 01:03:05.332970 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] ****************
2026-02-04 01:03:05.332974 | orchestrator | Wednesday 04 February 2026 01:00:01 +0000 (0:00:04.036) 0:00:16.972 ****
2026-02-04 01:03:05.332978 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:03:05.332982 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:03:05.332986 | orchestrator | changed: [testbed-node-0]
2026-02-04 01:03:05.332990 | orchestrator |
2026-02-04 01:03:05.332993 | orchestrator | TASK [mariadb : Copying over galera.cnf] ***************************************
2026-02-04 01:03:05.332997 | orchestrator | Wednesday 04 February 2026 01:00:03 +0000 (0:00:01.630) 0:00:18.603 ****
2026-02-04 01:03:05.333001 | orchestrator | changed: [testbed-node-0]
2026-02-04 01:03:05.333005 | orchestrator | changed: [testbed-node-1]
2026-02-04 01:03:05.333009 | orchestrator | changed: [testbed-node-2]
2026-02-04 01:03:05.333013 | orchestrator |
2026-02-04 01:03:05.333017 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-02-04 01:03:05.333021 | orchestrator | Wednesday 04 February 2026 01:00:09 +0000 (0:00:05.894) 0:00:24.498 ****
2026-02-04 01:03:05.333025 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 01:03:05.333029 | orchestrator |
2026-02-04 01:03:05.333033 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-02-04
01:03:05.333037 | orchestrator | Wednesday 04 February 2026 01:00:09 +0000 (0:00:00.599) 0:00:25.097 **** 2026-02-04 01:03:05.333048 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-04 01:03:05.333053 | orchestrator | 
skipping: [testbed-node-0] 2026-02-04 01:03:05.333060 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-04 01:03:05.333064 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:03:05.333072 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-04 01:03:05.333093 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:03:05.333101 | orchestrator | 2026-02-04 01:03:05.333110 | orchestrator | TASK [service-cert-copy : mariadb 
| Copying over backend internal TLS certificate] *** 2026-02-04 01:03:05.333116 | orchestrator | Wednesday 04 February 2026 01:00:13 +0000 (0:00:03.737) 0:00:28.835 **** 2026-02-04 01:03:05.333126 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 
5 backup', '']}}}})  2026-02-04 01:03:05.333132 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:03:05.333143 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-04 01:03:05.333156 | orchestrator | skipping: 
[testbed-node-2] 2026-02-04 01:03:05.333166 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-04 01:03:05.333172 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:03:05.333180 | orchestrator | 2026-02-04 
01:03:05.333185 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-02-04 01:03:05.333192 | orchestrator | Wednesday 04 February 2026 01:00:18 +0000 (0:00:05.011) 0:00:33.846 **** 2026-02-04 01:03:05.333198 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server 
testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-04 01:03:05.333213 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:03:05.333230 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 
backup', '']}}}})  2026-02-04 01:03:05.333235 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:03:05.333239 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-04 01:03:05.333247 | orchestrator | skipping: 
[testbed-node-0] 2026-02-04 01:03:05.333250 | orchestrator | 2026-02-04 01:03:05.333254 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2026-02-04 01:03:05.333258 | orchestrator | Wednesday 04 February 2026 01:00:22 +0000 (0:00:03.516) 0:00:37.363 **** 2026-02-04 01:03:05.333269 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 
check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-04 01:03:05.333274 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', 
'']}}}}) 2026-02-04 01:03:05.333288 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-04 01:03:05.333293 | orchestrator | 2026-02-04 01:03:05.333297 | orchestrator | TASK [mariadb : Create MariaDB volume] 
***************************************** 2026-02-04 01:03:05.333301 | orchestrator | Wednesday 04 February 2026 01:00:26 +0000 (0:00:04.825) 0:00:42.188 **** 2026-02-04 01:03:05.333305 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:03:05.333309 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:03:05.333312 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:03:05.333316 | orchestrator | 2026-02-04 01:03:05.333320 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2026-02-04 01:03:05.333324 | orchestrator | Wednesday 04 February 2026 01:00:27 +0000 (0:00:00.848) 0:00:43.037 **** 2026-02-04 01:03:05.333328 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:03:05.333332 | orchestrator | ok: [testbed-node-1] 2026-02-04 01:03:05.333336 | orchestrator | ok: [testbed-node-2] 2026-02-04 01:03:05.333340 | orchestrator | 2026-02-04 01:03:05.333343 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2026-02-04 01:03:05.333347 | orchestrator | Wednesday 04 February 2026 01:00:28 +0000 (0:00:00.535) 0:00:43.572 **** 2026-02-04 01:03:05.333354 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:03:05.333359 | orchestrator | ok: [testbed-node-1] 2026-02-04 01:03:05.333362 | orchestrator | ok: [testbed-node-2] 2026-02-04 01:03:05.333366 | orchestrator | 2026-02-04 01:03:05.333370 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2026-02-04 01:03:05.333374 | orchestrator | Wednesday 04 February 2026 01:00:28 +0000 (0:00:00.329) 0:00:43.901 **** 2026-02-04 01:03:05.333379 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2026-02-04 01:03:05.333383 | orchestrator | ...ignoring 2026-02-04 01:03:05.333387 | orchestrator | fatal: [testbed-node-2]: FAILED! 
=> {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2026-02-04 01:03:05.333391 | orchestrator | ...ignoring 2026-02-04 01:03:05.333395 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2026-02-04 01:03:05.333399 | orchestrator | ...ignoring 2026-02-04 01:03:05.333402 | orchestrator | 2026-02-04 01:03:05.333406 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2026-02-04 01:03:05.333410 | orchestrator | Wednesday 04 February 2026 01:00:39 +0000 (0:00:10.938) 0:00:54.840 **** 2026-02-04 01:03:05.333414 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:03:05.333419 | orchestrator | ok: [testbed-node-1] 2026-02-04 01:03:05.333425 | orchestrator | ok: [testbed-node-2] 2026-02-04 01:03:05.333445 | orchestrator | 2026-02-04 01:03:05.333451 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2026-02-04 01:03:05.333457 | orchestrator | Wednesday 04 February 2026 01:00:40 +0000 (0:00:00.543) 0:00:55.383 **** 2026-02-04 01:03:05.333462 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:03:05.333468 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:03:05.333474 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:03:05.333480 | orchestrator | 2026-02-04 01:03:05.333486 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2026-02-04 01:03:05.333492 | orchestrator | Wednesday 04 February 2026 01:00:40 +0000 (0:00:00.762) 0:00:56.146 **** 2026-02-04 01:03:05.333498 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:03:05.333504 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:03:05.333511 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:03:05.333517 | orchestrator | 2026-02-04 01:03:05.333524 | orchestrator | TASK 
[mariadb : Extract MariaDB service WSREP sync status] ********************* 2026-02-04 01:03:05.333531 | orchestrator | Wednesday 04 February 2026 01:00:41 +0000 (0:00:00.611) 0:00:56.757 **** 2026-02-04 01:03:05.333537 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:03:05.333542 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:03:05.333546 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:03:05.333551 | orchestrator | 2026-02-04 01:03:05.333556 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2026-02-04 01:03:05.333560 | orchestrator | Wednesday 04 February 2026 01:00:42 +0000 (0:00:00.481) 0:00:57.238 **** 2026-02-04 01:03:05.333565 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:03:05.333570 | orchestrator | ok: [testbed-node-1] 2026-02-04 01:03:05.333574 | orchestrator | ok: [testbed-node-2] 2026-02-04 01:03:05.333579 | orchestrator | 2026-02-04 01:03:05.333583 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2026-02-04 01:03:05.333588 | orchestrator | Wednesday 04 February 2026 01:00:42 +0000 (0:00:00.438) 0:00:57.676 **** 2026-02-04 01:03:05.333596 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:03:05.333602 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:03:05.333609 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:03:05.333625 | orchestrator | 2026-02-04 01:03:05.333632 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-02-04 01:03:05.333638 | orchestrator | Wednesday 04 February 2026 01:00:43 +0000 (0:00:00.782) 0:00:58.459 **** 2026-02-04 01:03:05.333650 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:03:05.333656 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:03:05.333663 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2026-02-04 01:03:05.333669 | orchestrator | 2026-02-04 
01:03:05.333674 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2026-02-04 01:03:05.333680 | orchestrator | Wednesday 04 February 2026 01:00:43 +0000 (0:00:00.401) 0:00:58.860 **** 2026-02-04 01:03:05.333687 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:03:05.333714 | orchestrator | 2026-02-04 01:03:05.333722 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2026-02-04 01:03:05.333729 | orchestrator | Wednesday 04 February 2026 01:00:54 +0000 (0:00:10.365) 0:01:09.226 **** 2026-02-04 01:03:05.333735 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:03:05.333741 | orchestrator | 2026-02-04 01:03:05.333752 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-02-04 01:03:05.333758 | orchestrator | Wednesday 04 February 2026 01:00:54 +0000 (0:00:00.137) 0:01:09.364 **** 2026-02-04 01:03:05.333765 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:03:05.333771 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:03:05.333786 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:03:05.333793 | orchestrator | 2026-02-04 01:03:05.333799 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2026-02-04 01:03:05.333804 | orchestrator | Wednesday 04 February 2026 01:00:55 +0000 (0:00:01.194) 0:01:10.558 **** 2026-02-04 01:03:05.333810 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:03:05.333816 | orchestrator | 2026-02-04 01:03:05.333822 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2026-02-04 01:03:05.333828 | orchestrator | Wednesday 04 February 2026 01:01:04 +0000 (0:00:08.980) 0:01:19.539 **** 2026-02-04 01:03:05.333834 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Wait for first MariaDB service port liveness (10 retries left). 
2026-02-04 01:03:05.333841 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:03:05.333847 | orchestrator | 2026-02-04 01:03:05.333853 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] ******* 2026-02-04 01:03:05.333860 | orchestrator | Wednesday 04 February 2026 01:01:11 +0000 (0:00:07.350) 0:01:26.889 **** 2026-02-04 01:03:05.333866 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:03:05.333872 | orchestrator | 2026-02-04 01:03:05.333878 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2026-02-04 01:03:05.333883 | orchestrator | Wednesday 04 February 2026 01:01:14 +0000 (0:00:02.596) 0:01:29.485 **** 2026-02-04 01:03:05.333888 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:03:05.333894 | orchestrator | 2026-02-04 01:03:05.333900 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2026-02-04 01:03:05.333906 | orchestrator | Wednesday 04 February 2026 01:01:14 +0000 (0:00:00.117) 0:01:29.603 **** 2026-02-04 01:03:05.333913 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:03:05.333919 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:03:05.333925 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:03:05.333931 | orchestrator | 2026-02-04 01:03:05.333938 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2026-02-04 01:03:05.333944 | orchestrator | Wednesday 04 February 2026 01:01:14 +0000 (0:00:00.387) 0:01:29.991 **** 2026-02-04 01:03:05.333950 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:03:05.333957 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:03:05.333963 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:03:05.333970 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2026-02-04 01:03:05.333977 | orchestrator | 2026-02-04 01:03:05.333983 | orchestrator | PLAY [Restart 
mariadb services] ************************************************ 2026-02-04 01:03:05.333989 | orchestrator | skipping: no hosts matched 2026-02-04 01:03:05.334091 | orchestrator | 2026-02-04 01:03:05.334098 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-02-04 01:03:05.334112 | orchestrator | 2026-02-04 01:03:05.334119 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-02-04 01:03:05.334126 | orchestrator | Wednesday 04 February 2026 01:01:15 +0000 (0:00:00.672) 0:01:30.664 **** 2026-02-04 01:03:05.334132 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:03:05.334136 | orchestrator | 2026-02-04 01:03:05.334139 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-02-04 01:03:05.334144 | orchestrator | Wednesday 04 February 2026 01:01:32 +0000 (0:00:17.426) 0:01:48.091 **** 2026-02-04 01:03:05.334148 | orchestrator | ok: [testbed-node-1] 2026-02-04 01:03:05.334152 | orchestrator | 2026-02-04 01:03:05.334155 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-02-04 01:03:05.334159 | orchestrator | Wednesday 04 February 2026 01:01:48 +0000 (0:00:15.567) 0:02:03.658 **** 2026-02-04 01:03:05.334163 | orchestrator | ok: [testbed-node-1] 2026-02-04 01:03:05.334167 | orchestrator | 2026-02-04 01:03:05.334171 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-02-04 01:03:05.334175 | orchestrator | 2026-02-04 01:03:05.334179 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-02-04 01:03:05.334183 | orchestrator | Wednesday 04 February 2026 01:01:51 +0000 (0:00:02.644) 0:02:06.302 **** 2026-02-04 01:03:05.334187 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:03:05.334191 | orchestrator | 2026-02-04 01:03:05.334194 | orchestrator | TASK [mariadb : Wait 
for MariaDB service port liveness] ************************ 2026-02-04 01:03:05.334198 | orchestrator | Wednesday 04 February 2026 01:02:12 +0000 (0:00:20.967) 0:02:27.270 **** 2026-02-04 01:03:05.334202 | orchestrator | ok: [testbed-node-2] 2026-02-04 01:03:05.334206 | orchestrator | 2026-02-04 01:03:05.334210 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-02-04 01:03:05.334214 | orchestrator | Wednesday 04 February 2026 01:02:21 +0000 (0:00:09.631) 0:02:36.901 **** 2026-02-04 01:03:05.334227 | orchestrator | ok: [testbed-node-2] 2026-02-04 01:03:05.334233 | orchestrator | 2026-02-04 01:03:05.334240 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-02-04 01:03:05.334249 | orchestrator | 2026-02-04 01:03:05.334255 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-02-04 01:03:05.334261 | orchestrator | Wednesday 04 February 2026 01:02:24 +0000 (0:00:03.148) 0:02:40.050 **** 2026-02-04 01:03:05.334267 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:03:05.334273 | orchestrator | 2026-02-04 01:03:05.334278 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-02-04 01:03:05.334284 | orchestrator | Wednesday 04 February 2026 01:02:39 +0000 (0:00:15.005) 0:02:55.056 **** 2026-02-04 01:03:05.334290 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:03:05.334295 | orchestrator | 2026-02-04 01:03:05.334302 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-02-04 01:03:05.334307 | orchestrator | Wednesday 04 February 2026 01:02:44 +0000 (0:00:04.672) 0:02:59.728 **** 2026-02-04 01:03:05.334313 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:03:05.334319 | orchestrator | 2026-02-04 01:03:05.334325 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-02-04 
01:03:05.334331 | orchestrator | 2026-02-04 01:03:05.334343 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-02-04 01:03:05.334349 | orchestrator | Wednesday 04 February 2026 01:02:47 +0000 (0:00:03.144) 0:03:02.872 **** 2026-02-04 01:03:05.334355 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 01:03:05.334361 | orchestrator | 2026-02-04 01:03:05.334368 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2026-02-04 01:03:05.334374 | orchestrator | Wednesday 04 February 2026 01:02:48 +0000 (0:00:00.557) 0:03:03.429 **** 2026-02-04 01:03:05.334377 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:03:05.334381 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:03:05.334385 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:03:05.334394 | orchestrator | 2026-02-04 01:03:05.334398 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2026-02-04 01:03:05.334402 | orchestrator | Wednesday 04 February 2026 01:02:50 +0000 (0:00:02.198) 0:03:05.627 **** 2026-02-04 01:03:05.334406 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:03:05.334409 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:03:05.334413 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:03:05.334417 | orchestrator | 2026-02-04 01:03:05.334421 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2026-02-04 01:03:05.334425 | orchestrator | Wednesday 04 February 2026 01:02:52 +0000 (0:00:02.012) 0:03:07.640 **** 2026-02-04 01:03:05.334429 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:03:05.334432 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:03:05.334436 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:03:05.334440 | orchestrator | 2026-02-04 01:03:05.334444 | orchestrator | TASK [mariadb : Granting 
permissions on Mariabackup database to backup user] *** 2026-02-04 01:03:05.334448 | orchestrator | Wednesday 04 February 2026 01:02:55 +0000 (0:00:02.791) 0:03:10.432 **** 2026-02-04 01:03:05.334452 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:03:05.334455 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:03:05.334459 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:03:05.334463 | orchestrator | 2026-02-04 01:03:05.334467 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-02-04 01:03:05.334471 | orchestrator | Wednesday 04 February 2026 01:02:57 +0000 (0:00:02.734) 0:03:13.166 **** 2026-02-04 01:03:05.334474 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:03:05.334478 | orchestrator | ok: [testbed-node-2] 2026-02-04 01:03:05.334482 | orchestrator | ok: [testbed-node-1] 2026-02-04 01:03:05.334489 | orchestrator | 2026-02-04 01:03:05.334496 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-02-04 01:03:05.334505 | orchestrator | Wednesday 04 February 2026 01:03:01 +0000 (0:00:03.745) 0:03:16.911 **** 2026-02-04 01:03:05.334511 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:03:05.334517 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:03:05.334522 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:03:05.334528 | orchestrator | 2026-02-04 01:03:05.334534 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 01:03:05.334541 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-02-04 01:03:05.334549 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1  2026-02-04 01:03:05.334556 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2026-02-04 01:03:05.334562 | orchestrator | testbed-node-2 : ok=20  
changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2026-02-04 01:03:05.334568 | orchestrator | 2026-02-04 01:03:05.334575 | orchestrator | 2026-02-04 01:03:05.334582 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-04 01:03:05.334586 | orchestrator | Wednesday 04 February 2026 01:03:01 +0000 (0:00:00.285) 0:03:17.197 **** 2026-02-04 01:03:05.334590 | orchestrator | =============================================================================== 2026-02-04 01:03:05.334594 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 38.39s 2026-02-04 01:03:05.334597 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 25.20s 2026-02-04 01:03:05.334601 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 15.01s 2026-02-04 01:03:05.334605 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.94s 2026-02-04 01:03:05.334609 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.37s 2026-02-04 01:03:05.334622 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 8.98s 2026-02-04 01:03:05.334626 | orchestrator | mariadb : Wait for first MariaDB service port liveness ------------------ 7.35s 2026-02-04 01:03:05.334630 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 5.90s 2026-02-04 01:03:05.334634 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 5.79s 2026-02-04 01:03:05.334638 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 5.01s 2026-02-04 01:03:05.334641 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 4.83s 2026-02-04 01:03:05.334645 | orchestrator | mariadb : Wait for MariaDB service port liveness 
------------------------ 4.67s 2026-02-04 01:03:05.334649 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 4.06s 2026-02-04 01:03:05.334653 | orchestrator | mariadb : Copying over config.json files for services ------------------- 4.04s 2026-02-04 01:03:05.334657 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 3.75s 2026-02-04 01:03:05.334664 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 3.74s 2026-02-04 01:03:05.334668 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 3.52s 2026-02-04 01:03:05.334672 | orchestrator | Check MariaDB service --------------------------------------------------- 3.37s 2026-02-04 01:03:05.334677 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 3.14s 2026-02-04 01:03:05.334684 | orchestrator | mariadb : Creating database backup user and setting permissions --------- 2.79s 2026-02-04 01:03:05.334690 | orchestrator | 2026-02-04 01:03:05 | INFO  | Task 737a2e67-91d8-4512-aad2-8b11461da41a is in state STARTED 2026-02-04 01:03:05.334822 | orchestrator | 2026-02-04 01:03:05 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:03:08.386592 | orchestrator | 2026-02-04 01:03:08 | INFO  | Task b5bd28da-733a-431a-8597-2634ace4d989 is in state STARTED 2026-02-04 01:03:08.386897 | orchestrator | 2026-02-04 01:03:08 | INFO  | Task 9dea05c3-7501-47c0-8902-973d5d86909e is in state STARTED 2026-02-04 01:03:08.387989 | orchestrator | 2026-02-04 01:03:08 | INFO  | Task 737a2e67-91d8-4512-aad2-8b11461da41a is in state STARTED 2026-02-04 01:03:08.388033 | orchestrator | 2026-02-04 01:03:08 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:03:11.433669 | orchestrator | 2026-02-04 01:03:11 | INFO  | Task b5bd28da-733a-431a-8597-2634ace4d989 is in state STARTED 2026-02-04 01:03:11.435445 | orchestrator | 2026-02-04 01:03:11 | INFO  
| Task 9dea05c3-7501-47c0-8902-973d5d86909e is in state STARTED 2026-02-04 01:03:11.440078 | orchestrator | 2026-02-04 01:03:11 | INFO  | Task 737a2e67-91d8-4512-aad2-8b11461da41a is in state STARTED 2026-02-04 01:03:11.440934 | orchestrator | 2026-02-04 01:03:11 | INFO  | Wait 1 second(s) until the next check [identical STARTED/wait polling output for tasks b5bd28da-733a-431a-8597-2634ace4d989, 9dea05c3-7501-47c0-8902-973d5d86909e and 737a2e67-91d8-4512-aad2-8b11461da41a repeated every ~3 s from 01:03:14 through 01:03:44] 2026-02-04 01:03:44.971211 | orchestrator | 2026-02-04 01:03:44 | INFO  | 
Wait 1 second(s) until the next check 2026-02-04 01:03:48.025601 | orchestrator | 2026-02-04 01:03:48 | INFO  | Task b5bd28da-733a-431a-8597-2634ace4d989 is in state SUCCESS 2026-02-04 01:03:48.031669 | orchestrator | 2026-02-04 01:03:48.031788 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-02-04 01:03:48.031802 | orchestrator | 2.16.14 2026-02-04 01:03:48.031810 | orchestrator | 2026-02-04 01:03:48.031817 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2026-02-04 01:03:48.031825 | orchestrator | 2026-02-04 01:03:48.031831 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-04 01:03:48.031838 | orchestrator | Wednesday 04 February 2026 01:01:34 +0000 (0:00:00.627) 0:00:00.627 **** 2026-02-04 01:03:48.031845 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-04 01:03:48.031853 | orchestrator | 2026-02-04 01:03:48.031859 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-04 01:03:48.031866 | orchestrator | Wednesday 04 February 2026 01:01:35 +0000 (0:00:00.730) 0:00:01.358 **** 2026-02-04 01:03:48.031873 | orchestrator | ok: [testbed-node-4] 2026-02-04 01:03:48.031881 | orchestrator | ok: [testbed-node-3] 2026-02-04 01:03:48.031889 | orchestrator | ok: [testbed-node-5] 2026-02-04 01:03:48.031895 | orchestrator | 2026-02-04 01:03:48.031902 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-04 01:03:48.031908 | orchestrator | Wednesday 04 February 2026 01:01:36 +0000 (0:00:00.712) 0:00:02.071 **** 2026-02-04 01:03:48.031915 | orchestrator | ok: [testbed-node-3] 2026-02-04 01:03:48.031922 | orchestrator | ok: [testbed-node-4] 2026-02-04 01:03:48.031929 | orchestrator | ok: [testbed-node-5] 2026-02-04 01:03:48.031935 | orchestrator | 
2026-02-04 01:03:48.031941 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-04 01:03:48.032024 | orchestrator | Wednesday 04 February 2026 01:01:36 +0000 (0:00:00.338) 0:00:02.409 **** 2026-02-04 01:03:48.032032 | orchestrator | ok: [testbed-node-3] 2026-02-04 01:03:48.032038 | orchestrator | ok: [testbed-node-4] 2026-02-04 01:03:48.032044 | orchestrator | ok: [testbed-node-5] 2026-02-04 01:03:48.032049 | orchestrator | 2026-02-04 01:03:48.032055 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-04 01:03:48.032061 | orchestrator | Wednesday 04 February 2026 01:01:37 +0000 (0:00:00.929) 0:00:03.338 **** 2026-02-04 01:03:48.032068 | orchestrator | ok: [testbed-node-3] 2026-02-04 01:03:48.032302 | orchestrator | ok: [testbed-node-4] 2026-02-04 01:03:48.032311 | orchestrator | ok: [testbed-node-5] 2026-02-04 01:03:48.032317 | orchestrator | 2026-02-04 01:03:48.032323 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-04 01:03:48.032344 | orchestrator | Wednesday 04 February 2026 01:01:37 +0000 (0:00:00.351) 0:00:03.689 **** 2026-02-04 01:03:48.032350 | orchestrator | ok: [testbed-node-3] 2026-02-04 01:03:48.032380 | orchestrator | ok: [testbed-node-4] 2026-02-04 01:03:48.032387 | orchestrator | ok: [testbed-node-5] 2026-02-04 01:03:48.032393 | orchestrator | 2026-02-04 01:03:48.032399 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-04 01:03:48.032406 | orchestrator | Wednesday 04 February 2026 01:01:38 +0000 (0:00:00.331) 0:00:04.020 **** 2026-02-04 01:03:48.032412 | orchestrator | ok: [testbed-node-3] 2026-02-04 01:03:48.032419 | orchestrator | ok: [testbed-node-4] 2026-02-04 01:03:48.032424 | orchestrator | ok: [testbed-node-5] 2026-02-04 01:03:48.032430 | orchestrator | 2026-02-04 01:03:48.032436 | orchestrator | TASK [ceph-facts : Set_fact 
discovered_interpreter_python if not previously set] *** 2026-02-04 01:03:48.032443 | orchestrator | Wednesday 04 February 2026 01:01:38 +0000 (0:00:00.364) 0:00:04.385 **** 2026-02-04 01:03:48.032449 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:03:48.032456 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:03:48.032462 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:03:48.032468 | orchestrator | 2026-02-04 01:03:48.032474 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-04 01:03:48.032480 | orchestrator | Wednesday 04 February 2026 01:01:39 +0000 (0:00:00.587) 0:00:04.973 **** 2026-02-04 01:03:48.032486 | orchestrator | ok: [testbed-node-3] 2026-02-04 01:03:48.032493 | orchestrator | ok: [testbed-node-4] 2026-02-04 01:03:48.032498 | orchestrator | ok: [testbed-node-5] 2026-02-04 01:03:48.032504 | orchestrator | 2026-02-04 01:03:48.032510 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-04 01:03:48.032516 | orchestrator | Wednesday 04 February 2026 01:01:39 +0000 (0:00:00.335) 0:00:05.308 **** 2026-02-04 01:03:48.032522 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-04 01:03:48.032529 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-04 01:03:48.032534 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-04 01:03:48.032540 | orchestrator | 2026-02-04 01:03:48.032547 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-02-04 01:03:48.032553 | orchestrator | Wednesday 04 February 2026 01:01:40 +0000 (0:00:00.631) 0:00:05.939 **** 2026-02-04 01:03:48.032559 | orchestrator | ok: [testbed-node-3] 2026-02-04 01:03:48.032565 | orchestrator | ok: [testbed-node-4] 2026-02-04 01:03:48.032571 | orchestrator | ok: 
[testbed-node-5] 2026-02-04 01:03:48.032576 | orchestrator | 2026-02-04 01:03:48.032583 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-04 01:03:48.032588 | orchestrator | Wednesday 04 February 2026 01:01:40 +0000 (0:00:00.459) 0:00:06.398 **** 2026-02-04 01:03:48.032595 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-04 01:03:48.032601 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-04 01:03:48.032607 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-04 01:03:48.032613 | orchestrator | 2026-02-04 01:03:48.032619 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-04 01:03:48.032660 | orchestrator | Wednesday 04 February 2026 01:01:42 +0000 (0:00:02.128) 0:00:08.527 **** 2026-02-04 01:03:48.032670 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-02-04 01:03:48.032676 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-02-04 01:03:48.032726 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-02-04 01:03:48.032734 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:03:48.032740 | orchestrator | 2026-02-04 01:03:48.033022 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-04 01:03:48.033045 | orchestrator | Wednesday 04 February 2026 01:01:43 +0000 (0:00:00.705) 0:00:09.232 **** 2026-02-04 01:03:48.033053 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-04 01:03:48.033075 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-04 01:03:48.033082 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-04 01:03:48.033088 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:03:48.033094 | orchestrator | 2026-02-04 01:03:48.033100 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-04 01:03:48.033106 | orchestrator | Wednesday 04 February 2026 01:01:44 +0000 (0:00:00.877) 0:00:10.110 **** 2026-02-04 01:03:48.033116 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-04 01:03:48.033133 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-04 01:03:48.033140 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional 
result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-04 01:03:48.033146 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:03:48.033152 | orchestrator | 2026-02-04 01:03:48.033157 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-04 01:03:48.033164 | orchestrator | Wednesday 04 February 2026 01:01:44 +0000 (0:00:00.392) 0:00:10.502 **** 2026-02-04 01:03:48.033173 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '891bdafbed33', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-04 01:01:41.349642', 'end': '2026-02-04 01:01:41.377051', 'delta': '0:00:00.027409', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['891bdafbed33'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-04 01:03:48.033182 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '6312d60ae9ab', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-04 01:01:42.126519', 'end': '2026-02-04 01:01:42.164328', 'delta': '0:00:00.037809', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['6312d60ae9ab'], 'stderr_lines': [], 
'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-04 01:03:48.033222 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '6d8107b723a1', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-04 01:01:42.632737', 'end': '2026-02-04 01:01:42.661157', 'delta': '0:00:00.028420', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['6d8107b723a1'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-04 01:03:48.033230 | orchestrator | 2026-02-04 01:03:48.033237 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-04 01:03:48.033243 | orchestrator | Wednesday 04 February 2026 01:01:44 +0000 (0:00:00.215) 0:00:10.718 **** 2026-02-04 01:03:48.033250 | orchestrator | ok: [testbed-node-3] 2026-02-04 01:03:48.033257 | orchestrator | ok: [testbed-node-4] 2026-02-04 01:03:48.033264 | orchestrator | ok: [testbed-node-5] 2026-02-04 01:03:48.033270 | orchestrator | 2026-02-04 01:03:48.033275 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-04 01:03:48.033281 | orchestrator | Wednesday 04 February 2026 01:01:45 +0000 (0:00:00.535) 0:00:11.253 **** 2026-02-04 01:03:48.033288 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2026-02-04 01:03:48.033293 | orchestrator | 2026-02-04 01:03:48.033299 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-04 01:03:48.033305 | orchestrator | Wednesday 04 February 2026 
01:01:47 +0000 (0:00:01.645) 0:00:12.899 **** 2026-02-04 01:03:48.033311 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:03:48.033317 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:03:48.033326 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:03:48.033336 | orchestrator | 2026-02-04 01:03:48.033342 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-04 01:03:48.033352 | orchestrator | Wednesday 04 February 2026 01:01:47 +0000 (0:00:00.335) 0:00:13.235 **** 2026-02-04 01:03:48.033359 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:03:48.033365 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:03:48.033371 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:03:48.033377 | orchestrator | 2026-02-04 01:03:48.033383 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-04 01:03:48.033390 | orchestrator | Wednesday 04 February 2026 01:01:47 +0000 (0:00:00.425) 0:00:13.660 **** 2026-02-04 01:03:48.033395 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:03:48.033401 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:03:48.033407 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:03:48.033413 | orchestrator | 2026-02-04 01:03:48.033419 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-04 01:03:48.033425 | orchestrator | Wednesday 04 February 2026 01:01:48 +0000 (0:00:00.605) 0:00:14.266 **** 2026-02-04 01:03:48.033431 | orchestrator | ok: [testbed-node-3] 2026-02-04 01:03:48.033436 | orchestrator | 2026-02-04 01:03:48.033442 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-04 01:03:48.033448 | orchestrator | Wednesday 04 February 2026 01:01:48 +0000 (0:00:00.145) 0:00:14.411 **** 2026-02-04 01:03:48.033454 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:03:48.033460 | 
orchestrator | 2026-02-04 01:03:48.033466 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-04 01:03:48.033472 | orchestrator | Wednesday 04 February 2026 01:01:48 +0000 (0:00:00.264) 0:00:14.676 **** 2026-02-04 01:03:48.033477 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:03:48.033483 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:03:48.033489 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:03:48.033508 | orchestrator | 2026-02-04 01:03:48.033514 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-04 01:03:48.033520 | orchestrator | Wednesday 04 February 2026 01:01:49 +0000 (0:00:00.343) 0:00:15.020 **** 2026-02-04 01:03:48.033525 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:03:48.033531 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:03:48.033538 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:03:48.033544 | orchestrator | 2026-02-04 01:03:48.033549 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-04 01:03:48.033555 | orchestrator | Wednesday 04 February 2026 01:01:49 +0000 (0:00:00.332) 0:00:15.352 **** 2026-02-04 01:03:48.033561 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:03:48.033567 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:03:48.033573 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:03:48.033579 | orchestrator | 2026-02-04 01:03:48.033585 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-04 01:03:48.033591 | orchestrator | Wednesday 04 February 2026 01:01:50 +0000 (0:00:00.593) 0:00:15.945 **** 2026-02-04 01:03:48.033597 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:03:48.033604 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:03:48.033612 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:03:48.033620 | 
orchestrator | 2026-02-04 01:03:48.033627 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-04 01:03:48.033634 | orchestrator | Wednesday 04 February 2026 01:01:50 +0000 (0:00:00.420) 0:00:16.365 **** 2026-02-04 01:03:48.033641 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:03:48.033648 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:03:48.033654 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:03:48.033660 | orchestrator | 2026-02-04 01:03:48.033667 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-04 01:03:48.033675 | orchestrator | Wednesday 04 February 2026 01:01:50 +0000 (0:00:00.372) 0:00:16.738 **** 2026-02-04 01:03:48.033683 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:03:48.033691 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:03:48.033698 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:03:48.033735 | orchestrator | 2026-02-04 01:03:48.033742 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-04 01:03:48.033749 | orchestrator | Wednesday 04 February 2026 01:01:51 +0000 (0:00:00.332) 0:00:17.070 **** 2026-02-04 01:03:48.033755 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:03:48.033782 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:03:48.033788 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:03:48.033795 | orchestrator | 2026-02-04 01:03:48.033802 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-04 01:03:48.033810 | orchestrator | Wednesday 04 February 2026 01:01:51 +0000 (0:00:00.554) 0:00:17.625 **** 2026-02-04 01:03:48.033820 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--cab1220b--9ff6--5009--b197--fa753e4036d2-osd--block--cab1220b--9ff6--5009--b197--fa753e4036d2', 'dm-uuid-LVM-i1ir8cW1PvWS9XJjL7rtGfPs74IrwS1OtXRgctxodwlzbnYu05YC6ITqVCjt3Ewp'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-04 01:03:48.033836 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--4adee4b4--d62b--5502--a742--8ac6c3138b01-osd--block--4adee4b4--d62b--5502--a742--8ac6c3138b01', 'dm-uuid-LVM-SU4etYSpWEq0QUIDoovTGPho7gvfQS4CfqyRGhgWRYEgUBuM6qPphK1xLHiYiX7n'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-04 01:03:48.033850 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 01:03:48.033857 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 
Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 01:03:48.034170 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:03:48.034308 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:03:48.034425 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:03:48.034433 | orchestrator | 2026-02-04 01:03:48.034440 | orchestrator | TASK [ceph-facts : Set_fact devices
generate device list when osd_auto_discovery] *** 2026-02-04 01:03:48.034448 | orchestrator | Wednesday 04 February 2026 01:01:52 +0000 (0:00:00.650) 0:00:18.276 **** 2026-02-04 01:03:48.034457 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--cab1220b--9ff6--5009--b197--fa753e4036d2-osd--block--cab1220b--9ff6--5009--b197--fa753e4036d2', 'dm-uuid-LVM-i1ir8cW1PvWS9XJjL7rtGfPs74IrwS1OtXRgctxodwlzbnYu05YC6ITqVCjt3Ewp'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 01:03:48.034591 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--cab1220b--9ff6--5009--b197--fa753e4036d2-osd--block--cab1220b--9ff6--5009--b197--fa753e4036d2'], 'host': 'SCSI storage controller: Red Hat, Inc.
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-rJP7yo-d0Io-2Sbh-p8jO-QRbP-JI2P-SK5YlT', 'scsi-0QEMU_QEMU_HARDDISK_03b06afa-7fea-4d7e-bf2e-7215727f5f52', 'scsi-SQEMU_QEMU_HARDDISK_03b06afa-7fea-4d7e-bf2e-7215727f5f52'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 01:03:48.034603 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 01:03:48.034615 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--4adee4b4--d62b--5502--a742--8ac6c3138b01-osd--block--4adee4b4--d62b--5502--a742--8ac6c3138b01'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-AbG5Ab-g41T-U6Ls-d9pt-UBR4-ZKCx-x9UiyH', 'scsi-0QEMU_QEMU_HARDDISK_54fed4a3-dd06-43ea-9731-a81abbed62bd', 'scsi-SQEMU_QEMU_HARDDISK_54fed4a3-dd06-43ea-9731-a81abbed62bd'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 01:03:48.034626 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 01:03:48.034633 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd9a253f-e742-4747-9193-aa6fcde93089', 'scsi-SQEMU_QEMU_HARDDISK_fd9a253f-e742-4747-9193-aa6fcde93089'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 01:03:48.034640 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 01:03:48.034648 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-04-00-03-13-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 01:03:48.034665 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 01:03:48.034673 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:03:48.034681 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 01:03:48.034688 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 01:03:48.034698 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 01:03:48.034705 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 01:03:48.034717 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a371d88b-21aa-46d0-9a00-c59fe370106e', 'scsi-SQEMU_QEMU_HARDDISK_a371d88b-21aa-46d0-9a00-c59fe370106e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a371d88b-21aa-46d0-9a00-c59fe370106e-part1', 'scsi-SQEMU_QEMU_HARDDISK_a371d88b-21aa-46d0-9a00-c59fe370106e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a371d88b-21aa-46d0-9a00-c59fe370106e-part14', 'scsi-SQEMU_QEMU_HARDDISK_a371d88b-21aa-46d0-9a00-c59fe370106e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a371d88b-21aa-46d0-9a00-c59fe370106e-part15', 'scsi-SQEMU_QEMU_HARDDISK_a371d88b-21aa-46d0-9a00-c59fe370106e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a371d88b-21aa-46d0-9a00-c59fe370106e-part16', 'scsi-SQEMU_QEMU_HARDDISK_a371d88b-21aa-46d0-9a00-c59fe370106e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
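The long skip records above all trace back to one conditional: the device loop only does work when `osd_auto_discovery` is enabled, and this testbed leaves it at its default of `False`, so every enumerated block device is reported as "skipping". A minimal sketch of that pattern, assuming an illustrative task and fact name (not the role's actual code):

```yaml
# Hedged sketch: a per-device loop guarded by osd_auto_discovery.
# With the variable unset, `when:` evaluates to False and every
# device item is logged as "skipping", as in the records above.
- name: Collect candidate OSD devices (illustrative name)
  ansible.builtin.set_fact:
    _candidate_devices: "{{ _candidate_devices | default([]) + ['/dev/' + item.key] }}"
  loop: "{{ ansible_facts['devices'] | dict2items }}"
  when: osd_auto_discovery | default(False) | bool
```

Because `loop` still enumerates every block device (loop0–loop7, sda–sdd, dm-*, sr0) before the condition is evaluated per item, each node emits one verbose skip record per device.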
2026-02-04 01:03:48.034734 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--6cd3944c--50dd--590e--9699--94e09e9b1959-osd--block--6cd3944c--50dd--590e--9699--94e09e9b1959'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-niJgie-P4tu-prGp-syH5-mr1x-Ue9N-Xoxej0', 'scsi-0QEMU_QEMU_HARDDISK_27db8536-d7cf-467f-b2b6-f0129584608d', 'scsi-SQEMU_QEMU_HARDDISK_27db8536-d7cf-467f-b2b6-f0129584608d'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 01:03:48.034741 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--197bc0b1--bda8--5def--b850--786176b935dd-osd--block--197bc0b1--bda8--5def--b850--786176b935dd'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-4zvJig-R9CO-DeWu-dQTC-OC2s-SDYV-99Ae0P', 'scsi-0QEMU_QEMU_HARDDISK_d0af9621-3ff0-4b28-b816-705c9ef71a8d', 'scsi-SQEMU_QEMU_HARDDISK_d0af9621-3ff0-4b28-b816-705c9ef71a8d'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 01:03:48.034747 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e3daecb5--9fd0--5834--b191--078d341d10dc-osd--block--e3daecb5--9fd0--5834--b191--078d341d10dc', 'dm-uuid-LVM-b0VKYwSqdivqaHauLtP9AoYkjSg3Qhd1ajk3GtVp2Q0TYOfGU3ZcyDXlPdU0pGPI'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 01:03:48.034908 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_092c1f4e-b194-45a0-a7eb-d90ae37efda4', 'scsi-SQEMU_QEMU_HARDDISK_092c1f4e-b194-45a0-a7eb-d90ae37efda4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 01:03:48.034923 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--607d890d--3e41--57a1--9874--83b389fa50fb-osd--block--607d890d--3e41--57a1--9874--83b389fa50fb', 'dm-uuid-LVM-tcfEEFY9BrwSTyQheLvKc5mjGSniqt7Qw1sChWus7fPQM1wJdmFmQzYM75n7njop'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 01:03:48.034935 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-04-00-03-10-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 
253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 01:03:48.034942 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 01:03:48.034949 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:03:48.034956 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 01:03:48.034970 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 
0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 01:03:48.034984 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 01:03:48.034992 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 01:03:48.034999 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 01:03:48.035012 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 01:03:48.035020 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 01:03:48.035031 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3f45c181-f890-4932-9a09-2e0bc4fa8f14', 'scsi-SQEMU_QEMU_HARDDISK_3f45c181-f890-4932-9a09-2e0bc4fa8f14'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3f45c181-f890-4932-9a09-2e0bc4fa8f14-part1', 'scsi-SQEMU_QEMU_HARDDISK_3f45c181-f890-4932-9a09-2e0bc4fa8f14-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3f45c181-f890-4932-9a09-2e0bc4fa8f14-part14', 'scsi-SQEMU_QEMU_HARDDISK_3f45c181-f890-4932-9a09-2e0bc4fa8f14-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3f45c181-f890-4932-9a09-2e0bc4fa8f14-part15', 'scsi-SQEMU_QEMU_HARDDISK_3f45c181-f890-4932-9a09-2e0bc4fa8f14-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3f45c181-f890-4932-9a09-2e0bc4fa8f14-part16', 'scsi-SQEMU_QEMU_HARDDISK_3f45c181-f890-4932-9a09-2e0bc4fa8f14-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
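After the device loop, the ceph-facts tasks that follow build a `_monitor_addresses` entry per monitor host for ipv4 (the ipv6 variant is skipped on every host). A hedged sketch of that accumulation pattern, with the address lookup approximated from the task title rather than taken from the real role:

```yaml
# Hedged sketch: accumulate one {name, addr} entry per monitor host.
# The real role resolves the address from its configured monitor
# address/interface settings; this hostvars lookup is an assumption.
- name: Set_fact _monitor_addresses - ipv4 (illustrative)
  ansible.builtin.set_fact:
    _monitor_addresses: "{{ _monitor_addresses | default([]) + [{'name': item, 'addr': hostvars[item]['ansible_facts']['default_ipv4']['address']}] }}"
  loop: "{{ groups['mons'] }}"
```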
2026-02-04 01:03:48.035050 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--e3daecb5--9fd0--5834--b191--078d341d10dc-osd--block--e3daecb5--9fd0--5834--b191--078d341d10dc'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-YLAFzg-kJmY-aUic-VBuH-g3uH-9m4L-hSfk98', 'scsi-0QEMU_QEMU_HARDDISK_7fd0bd10-abd8-4e0c-8290-4705cc531d08', 'scsi-SQEMU_QEMU_HARDDISK_7fd0bd10-abd8-4e0c-8290-4705cc531d08'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 01:03:48.035057 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--607d890d--3e41--57a1--9874--83b389fa50fb-osd--block--607d890d--3e41--57a1--9874--83b389fa50fb'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-pScOt8-ITDZ-tXnq-6HO2-2rSN-88m4-bhVjVu', 'scsi-0QEMU_QEMU_HARDDISK_194ead4c-37cf-4237-b5c2-bf752e6bc508', 'scsi-SQEMU_QEMU_HARDDISK_194ead4c-37cf-4237-b5c2-bf752e6bc508'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 01:03:48.035068 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ba01a385-e2e8-43d7-8237-fc6e15a9de89', 'scsi-SQEMU_QEMU_HARDDISK_ba01a385-e2e8-43d7-8237-fc6e15a9de89'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 01:03:48.035080 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-04-00-03-31-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-04 01:03:48.035087 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:03:48.035093 | orchestrator |
2026-02-04 01:03:48.035099 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-02-04 01:03:48.035105 | orchestrator | Wednesday 04 February 2026 01:01:53 +0000 (0:00:00.711) 0:00:18.987 ****
2026-02-04 01:03:48.035111 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:03:48.035117 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:03:48.035123 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:03:48.035128 | orchestrator |
2026-02-04 01:03:48.035134 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-02-04 01:03:48.035140 | orchestrator | Wednesday 04 February 2026 01:01:54 +0000 (0:00:00.758) 0:00:19.746 ****
2026-02-04 01:03:48.035146 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:03:48.035152 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:03:48.035157 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:03:48.035163 | orchestrator |
2026-02-04 01:03:48.035170 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-04 01:03:48.035176 | orchestrator | Wednesday 04 February 2026 01:01:54 +0000 (0:00:00.658) 0:00:20.404 ****
2026-02-04 01:03:48.035183 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:03:48.035189 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:03:48.035195 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:03:48.035202 | orchestrator |
2026-02-04 01:03:48.035207 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-04 01:03:48.035214 | orchestrator | Wednesday 04 February 2026 01:01:55 +0000 (0:00:00.642) 0:00:21.047 ****
2026-02-04 01:03:48.035220 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:03:48.035226 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:03:48.035232 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:03:48.035238 | orchestrator |
2026-02-04 01:03:48.035244 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-04 01:03:48.035253 | orchestrator | Wednesday 04 February 2026 01:01:55 +0000 (0:00:00.340) 0:00:21.388 ****
2026-02-04 01:03:48.035259 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:03:48.035265 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:03:48.035272 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:03:48.035279 | orchestrator |
2026-02-04 01:03:48.035285 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-04 01:03:48.035292 | orchestrator | Wednesday 04 February 2026 01:01:56 +0000 (0:00:00.430) 0:00:21.819 ****
2026-02-04 01:03:48.035304 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:03:48.035310 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:03:48.035317 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:03:48.035323 | orchestrator |
2026-02-04 01:03:48.035328 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-02-04 01:03:48.035335 | orchestrator | Wednesday 04 February 2026 01:01:56 +0000 (0:00:00.571) 0:00:22.390 ****
2026-02-04 01:03:48.035341 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-02-04 01:03:48.035348 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-02-04 01:03:48.035354 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-02-04 01:03:48.035361 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-02-04 01:03:48.035367 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-02-04 01:03:48.035374 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-02-04 01:03:48.035380 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-02-04 01:03:48.035387 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-02-04 01:03:48.035393 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-02-04 01:03:48.035400 | orchestrator |
2026-02-04 01:03:48.035406 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-02-04 01:03:48.035413 | orchestrator | Wednesday 04 February 2026 01:01:57 +0000 (0:00:01.127) 0:00:23.517 ****
2026-02-04 01:03:48.035419 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-02-04 01:03:48.035426 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-02-04 01:03:48.035432 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-02-04 01:03:48.035439 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:03:48.035445 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-02-04 01:03:48.035452 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-02-04 01:03:48.035458 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-02-04 01:03:48.035465 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:03:48.035471 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-02-04 01:03:48.035477 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-02-04 01:03:48.035483 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-02-04 01:03:48.035490 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:03:48.035496 | orchestrator |
2026-02-04 01:03:48.035503 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-02-04 01:03:48.035509 | orchestrator | Wednesday 04 February 2026 01:01:58 +0000 (0:00:00.406) 0:00:23.923 ****
2026-02-04 
01:03:48.035517 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-04 01:03:48.035524 | orchestrator | 2026-02-04 01:03:48.035531 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-04 01:03:48.035539 | orchestrator | Wednesday 04 February 2026 01:01:58 +0000 (0:00:00.744) 0:00:24.668 **** 2026-02-04 01:03:48.035551 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:03:48.035559 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:03:48.035566 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:03:48.035573 | orchestrator | 2026-02-04 01:03:48.035580 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-04 01:03:48.035587 | orchestrator | Wednesday 04 February 2026 01:01:59 +0000 (0:00:00.353) 0:00:25.021 **** 2026-02-04 01:03:48.035595 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:03:48.035602 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:03:48.035609 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:03:48.035616 | orchestrator | 2026-02-04 01:03:48.035624 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-04 01:03:48.035637 | orchestrator | Wednesday 04 February 2026 01:01:59 +0000 (0:00:00.357) 0:00:25.379 **** 2026-02-04 01:03:48.035645 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:03:48.035652 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:03:48.035659 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:03:48.035665 | orchestrator | 2026-02-04 01:03:48.035672 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-04 01:03:48.035680 | orchestrator | Wednesday 04 February 2026 01:02:00 +0000 (0:00:00.368) 0:00:25.748 **** 2026-02-04 
01:03:48.035687 | orchestrator | ok: [testbed-node-3] 2026-02-04 01:03:48.035694 | orchestrator | ok: [testbed-node-4] 2026-02-04 01:03:48.035701 | orchestrator | ok: [testbed-node-5] 2026-02-04 01:03:48.035709 | orchestrator | 2026-02-04 01:03:48.035716 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-04 01:03:48.035723 | orchestrator | Wednesday 04 February 2026 01:02:00 +0000 (0:00:00.749) 0:00:26.497 **** 2026-02-04 01:03:48.035730 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-04 01:03:48.035735 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-04 01:03:48.035742 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-04 01:03:48.035747 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:03:48.035753 | orchestrator | 2026-02-04 01:03:48.035783 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-04 01:03:48.035790 | orchestrator | Wednesday 04 February 2026 01:02:01 +0000 (0:00:00.387) 0:00:26.885 **** 2026-02-04 01:03:48.035797 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-04 01:03:48.035804 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-04 01:03:48.035815 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-04 01:03:48.035822 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:03:48.035828 | orchestrator | 2026-02-04 01:03:48.035835 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-04 01:03:48.035841 | orchestrator | Wednesday 04 February 2026 01:02:01 +0000 (0:00:00.417) 0:00:27.302 **** 2026-02-04 01:03:48.035848 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-04 01:03:48.035854 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-04 01:03:48.035860 | 
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-04 01:03:48.035868 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:03:48.035876 | orchestrator | 2026-02-04 01:03:48.035884 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-04 01:03:48.035894 | orchestrator | Wednesday 04 February 2026 01:02:01 +0000 (0:00:00.377) 0:00:27.680 **** 2026-02-04 01:03:48.035903 | orchestrator | ok: [testbed-node-3] 2026-02-04 01:03:48.035911 | orchestrator | ok: [testbed-node-4] 2026-02-04 01:03:48.035918 | orchestrator | ok: [testbed-node-5] 2026-02-04 01:03:48.035924 | orchestrator | 2026-02-04 01:03:48.035929 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-04 01:03:48.035934 | orchestrator | Wednesday 04 February 2026 01:02:02 +0000 (0:00:00.346) 0:00:28.026 **** 2026-02-04 01:03:48.035940 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-02-04 01:03:48.035945 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-02-04 01:03:48.035951 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-02-04 01:03:48.035957 | orchestrator | 2026-02-04 01:03:48.035962 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-04 01:03:48.035968 | orchestrator | Wednesday 04 February 2026 01:02:02 +0000 (0:00:00.614) 0:00:28.641 **** 2026-02-04 01:03:48.035973 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-04 01:03:48.035979 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-04 01:03:48.035984 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-04 01:03:48.035990 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-02-04 01:03:48.036003 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => 
(item=testbed-node-4) 2026-02-04 01:03:48.036009 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-04 01:03:48.036016 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-04 01:03:48.036023 | orchestrator | 2026-02-04 01:03:48.036029 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-04 01:03:48.036036 | orchestrator | Wednesday 04 February 2026 01:02:04 +0000 (0:00:01.285) 0:00:29.926 **** 2026-02-04 01:03:48.036042 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-04 01:03:48.036049 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-04 01:03:48.036055 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-04 01:03:48.036062 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-02-04 01:03:48.036068 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-04 01:03:48.036075 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-04 01:03:48.036086 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-04 01:03:48.036093 | orchestrator | 2026-02-04 01:03:48.036099 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2026-02-04 01:03:48.036106 | orchestrator | Wednesday 04 February 2026 01:02:06 +0000 (0:00:02.255) 0:00:32.182 **** 2026-02-04 01:03:48.036112 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:03:48.036119 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:03:48.036125 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2026-02-04 01:03:48.036132 | orchestrator | 2026-02-04 01:03:48.036138 | 
orchestrator | TASK [create openstack pool(s)] ************************************************ 2026-02-04 01:03:48.036145 | orchestrator | Wednesday 04 February 2026 01:02:06 +0000 (0:00:00.390) 0:00:32.572 **** 2026-02-04 01:03:48.036172 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-02-04 01:03:48.036181 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-02-04 01:03:48.036188 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-02-04 01:03:48.036199 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-02-04 01:03:48.036205 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-02-04 01:03:48.036213 | orchestrator | 2026-02-04 01:03:48.036220 | orchestrator | TASK [generate keys] 
*********************************************************** 2026-02-04 01:03:48.036226 | orchestrator | Wednesday 04 February 2026 01:02:50 +0000 (0:00:44.058) 0:01:16.631 **** 2026-02-04 01:03:48.036233 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-04 01:03:48.036245 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-04 01:03:48.036251 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-04 01:03:48.036258 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-04 01:03:48.036264 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-04 01:03:48.036271 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-04 01:03:48.036278 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2026-02-04 01:03:48.036284 | orchestrator | 2026-02-04 01:03:48.036291 | orchestrator | TASK [get keys from monitors] ************************************************** 2026-02-04 01:03:48.036297 | orchestrator | Wednesday 04 February 2026 01:03:14 +0000 (0:00:23.851) 0:01:40.482 **** 2026-02-04 01:03:48.036304 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-04 01:03:48.036311 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-04 01:03:48.036317 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-04 01:03:48.036324 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-04 01:03:48.036330 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-04 01:03:48.036337 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-04 01:03:48.036343 | orchestrator | 
ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-04 01:03:48.036349 | orchestrator | 2026-02-04 01:03:48.036356 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2026-02-04 01:03:48.036362 | orchestrator | Wednesday 04 February 2026 01:03:26 +0000 (0:00:11.614) 0:01:52.096 **** 2026-02-04 01:03:48.036369 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-04 01:03:48.036375 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-04 01:03:48.036381 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-04 01:03:48.036387 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-04 01:03:48.036393 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-04 01:03:48.036403 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-04 01:03:48.036409 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-04 01:03:48.036415 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-04 01:03:48.036421 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-04 01:03:48.036427 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-04 01:03:48.036432 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-04 01:03:48.036438 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-04 01:03:48.036444 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-04 01:03:48.036450 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 
2026-02-04 01:03:48.036455 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-04 01:03:48.036461 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-04 01:03:48.036467 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-04 01:03:48.036473 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-04 01:03:48.036479 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2026-02-04 01:03:48.036489 | orchestrator | 2026-02-04 01:03:48.036495 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 01:03:48.036502 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-02-04 01:03:48.036509 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-02-04 01:03:48.036520 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-02-04 01:03:48.036527 | orchestrator | 2026-02-04 01:03:48.036533 | orchestrator | 2026-02-04 01:03:48.036539 | orchestrator | 2026-02-04 01:03:48.036545 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-04 01:03:48.036551 | orchestrator | Wednesday 04 February 2026 01:03:45 +0000 (0:00:18.731) 0:02:10.827 **** 2026-02-04 01:03:48.036557 | orchestrator | =============================================================================== 2026-02-04 01:03:48.036564 | orchestrator | create openstack pool(s) ----------------------------------------------- 44.06s 2026-02-04 01:03:48.036570 | orchestrator | generate keys ---------------------------------------------------------- 23.85s 2026-02-04 01:03:48.036575 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 18.73s 
2026-02-04 01:03:48.036581 | orchestrator | get keys from monitors ------------------------------------------------- 11.61s 2026-02-04 01:03:48.036588 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 2.26s 2026-02-04 01:03:48.036593 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.13s 2026-02-04 01:03:48.036600 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.65s 2026-02-04 01:03:48.036605 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 1.29s 2026-02-04 01:03:48.036611 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 1.13s 2026-02-04 01:03:48.036618 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.93s 2026-02-04 01:03:48.036624 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.88s 2026-02-04 01:03:48.036631 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.76s 2026-02-04 01:03:48.036638 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.75s 2026-02-04 01:03:48.036644 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.74s 2026-02-04 01:03:48.036651 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.73s 2026-02-04 01:03:48.036658 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.71s 2026-02-04 01:03:48.036665 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.71s 2026-02-04 01:03:48.036672 | orchestrator | ceph-facts : Check for a ceph mon socket -------------------------------- 0.71s 2026-02-04 01:03:48.036679 | orchestrator | ceph-facts : Set default osd_pool_default_crush_rule fact --------------- 0.66s 2026-02-04 
01:03:48.036686 | orchestrator | ceph-facts : Collect existed devices ------------------------------------ 0.65s 2026-02-04 01:03:48.036693 | orchestrator | 2026-02-04 01:03:48 | INFO  | Task 9dea05c3-7501-47c0-8902-973d5d86909e is in state STARTED 2026-02-04 01:03:48.036699 | orchestrator | 2026-02-04 01:03:48 | INFO  | Task 737a2e67-91d8-4512-aad2-8b11461da41a is in state STARTED 2026-02-04 01:03:48.037425 | orchestrator | 2026-02-04 01:03:48 | INFO  | Task 0e298fc7-be1c-46c0-80bc-e669507372be is in state STARTED 2026-02-04 01:03:48.037886 | orchestrator | 2026-02-04 01:03:48 | INFO  | Wait 1 second(s) until the next check
[... identical task-state polling rounds (all three tasks in state STARTED), repeated every ~3 s from 01:03:51 through 01:04:27, omitted ...]
2026-02-04 01:04:30.844514 | orchestrator | 2026-02-04 01:04:30 | INFO  | Task 9dea05c3-7501-47c0-8902-973d5d86909e is in state STARTED 2026-02-04 01:04:30.846904 | orchestrator | 2026-02-04 01:04:30 | INFO  | Task 737a2e67-91d8-4512-aad2-8b11461da41a is in state STARTED 2026-02-04 01:04:30.848815 | orchestrator | 2026-02-04 01:04:30 | INFO  | Task 124114d8-76f0-4df6-94f4-252196ad2820 is in state STARTED 2026-02-04 01:04:30.850592 | orchestrator | 2026-02-04 01:04:30 | INFO  | Task 0e298fc7-be1c-46c0-80bc-e669507372be is in state SUCCESS 2026-02-04 01:04:30.850640 | orchestrator | 2026-02-04 01:04:30 | INFO  | Wait 1 second(s) until the next check
[... identical task-state polling rounds, repeated every ~3 s from 01:04:33 through 01:04:49, omitted ...]
2026-02-04 01:04:52.215212 | orchestrator | 2026-02-04 01:04:52 | INFO  | Task 9dea05c3-7501-47c0-8902-973d5d86909e is in state SUCCESS 2026-02-04 01:04:52.216565 | orchestrator | 2026-02-04 01:04:52.216625 | orchestrator | 2026-02-04 01:04:52.216634 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2026-02-04 01:04:52.216643 | orchestrator | 2026-02-04 01:04:52.216649 | orchestrator | TASK [Check if ceph keys exist] ************************************************ 2026-02-04 01:04:52.216657 | orchestrator | Wednesday 04 February 2026 01:03:50 +0000 (0:00:00.181) 0:00:00.181 **** 2026-02-04 01:04:52.216663 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 
2026-02-04 01:04:52.216671 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-02-04 01:04:52.216691 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-02-04 01:04:52.216722 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-02-04 01:04:52.216728 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-02-04 01:04:52.216734 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-02-04 01:04:52.216739 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-02-04 01:04:52.216745 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2026-02-04 01:04:52.216751 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-02-04 01:04:52.216757 | orchestrator | 2026-02-04 01:04:52.216762 | orchestrator | TASK [Fetch all ceph keys] ***************************************************** 2026-02-04 01:04:52.216769 | orchestrator | Wednesday 04 February 2026 01:03:55 +0000 (0:00:04.766) 0:00:04.948 **** 2026-02-04 01:04:52.216775 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-02-04 01:04:52.216780 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-02-04 01:04:52.216786 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-02-04 01:04:52.216792 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-02-04 01:04:52.216797 | orchestrator | ok: [testbed-manager -> 
testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-02-04 01:04:52.216803 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-02-04 01:04:52.216809 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-02-04 01:04:52.216814 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2026-02-04 01:04:52.216819 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-02-04 01:04:52.216824 | orchestrator | 2026-02-04 01:04:52.216830 | orchestrator | TASK [Create share directory] ************************************************** 2026-02-04 01:04:52.216836 | orchestrator | Wednesday 04 February 2026 01:03:59 +0000 (0:00:04.574) 0:00:09.523 **** 2026-02-04 01:04:52.216843 | orchestrator | changed: [testbed-manager -> localhost] 2026-02-04 01:04:52.216872 | orchestrator | 2026-02-04 01:04:52.216878 | orchestrator | TASK [Write ceph keys to the share directory] ********************************** 2026-02-04 01:04:52.216885 | orchestrator | Wednesday 04 February 2026 01:04:00 +0000 (0:00:01.102) 0:00:10.625 **** 2026-02-04 01:04:52.216890 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2026-02-04 01:04:52.216896 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-02-04 01:04:52.216902 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-02-04 01:04:52.216908 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2026-02-04 01:04:52.216914 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-02-04 01:04:52.216921 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2026-02-04 
01:04:52.216927 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2026-02-04 01:04:52.216933 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2026-02-04 01:04:52.216939 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2026-02-04 01:04:52.216945 | orchestrator | 2026-02-04 01:04:52.216950 | orchestrator | TASK [Check if target directories exist] *************************************** 2026-02-04 01:04:52.216964 | orchestrator | Wednesday 04 February 2026 01:04:17 +0000 (0:00:16.514) 0:00:27.140 **** 2026-02-04 01:04:52.216970 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/infrastructure/files/ceph) 2026-02-04 01:04:52.216974 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume) 2026-02-04 01:04:52.216979 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-02-04 01:04:52.216983 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-02-04 01:04:52.216999 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-02-04 01:04:52.217003 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-02-04 01:04:52.217007 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/glance) 2026-02-04 01:04:52.217011 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/gnocchi) 2026-02-04 01:04:52.217015 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/manila) 2026-02-04 01:04:52.217019 | orchestrator | 2026-02-04 01:04:52.217029 | orchestrator | TASK [Write 
ceph keys to the configuration directory] ************************** 2026-02-04 01:04:52.217033 | orchestrator | Wednesday 04 February 2026 01:04:20 +0000 (0:00:03.275) 0:00:30.416 **** 2026-02-04 01:04:52.217037 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring) 2026-02-04 01:04:52.217261 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-02-04 01:04:52.217277 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-02-04 01:04:52.217287 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring) 2026-02-04 01:04:52.217294 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-02-04 01:04:52.217300 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring) 2026-02-04 01:04:52.217306 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring) 2026-02-04 01:04:52.217313 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring) 2026-02-04 01:04:52.217318 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring) 2026-02-04 01:04:52.217324 | orchestrator | 2026-02-04 01:04:52.217330 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 01:04:52.217336 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-04 01:04:52.217343 | orchestrator | 2026-02-04 01:04:52.217349 | orchestrator | 2026-02-04 01:04:52.217355 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-04 01:04:52.217361 | orchestrator | Wednesday 04 February 2026 01:04:28 +0000 (0:00:07.519) 0:00:37.936 **** 2026-02-04 01:04:52.217367 | orchestrator | =============================================================================== 2026-02-04 01:04:52.217373 | orchestrator | Write ceph keys to the share 
directory --------------------------------- 16.51s 2026-02-04 01:04:52.217379 | orchestrator | Write ceph keys to the configuration directory -------------------------- 7.52s 2026-02-04 01:04:52.217386 | orchestrator | Check if ceph keys exist ------------------------------------------------ 4.77s 2026-02-04 01:04:52.217391 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.57s 2026-02-04 01:04:52.217397 | orchestrator | Check if target directories exist --------------------------------------- 3.28s 2026-02-04 01:04:52.217403 | orchestrator | Create share directory -------------------------------------------------- 1.10s 2026-02-04 01:04:52.217408 | orchestrator | 2026-02-04 01:04:52.217414 | orchestrator | 2026-02-04 01:04:52.217421 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-04 01:04:52.217427 | orchestrator | 2026-02-04 01:04:52.217441 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-04 01:04:52.217445 | orchestrator | Wednesday 04 February 2026 01:03:07 +0000 (0:00:00.301) 0:00:00.301 **** 2026-02-04 01:04:52.217450 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:04:52.217455 | orchestrator | ok: [testbed-node-1] 2026-02-04 01:04:52.217459 | orchestrator | ok: [testbed-node-2] 2026-02-04 01:04:52.217463 | orchestrator | 2026-02-04 01:04:52.217467 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-04 01:04:52.217471 | orchestrator | Wednesday 04 February 2026 01:03:08 +0000 (0:00:00.337) 0:00:00.638 **** 2026-02-04 01:04:52.217475 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2026-02-04 01:04:52.217479 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2026-02-04 01:04:52.217483 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2026-02-04 01:04:52.217487 | orchestrator | 2026-02-04 
01:04:52.217491 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2026-02-04 01:04:52.217495 | orchestrator | 2026-02-04 01:04:52.217499 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-02-04 01:04:52.217503 | orchestrator | Wednesday 04 February 2026 01:03:08 +0000 (0:00:00.501) 0:00:01.140 **** 2026-02-04 01:04:52.217507 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 01:04:52.217511 | orchestrator | 2026-02-04 01:04:52.217515 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2026-02-04 01:04:52.217519 | orchestrator | Wednesday 04 February 2026 01:03:09 +0000 (0:00:00.535) 0:00:01.675 **** 2026-02-04 01:04:52.217541 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if 
{ path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-04 01:04:52.217548 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-04 01:04:52.217565 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-04 01:04:52.217573 | orchestrator | 2026-02-04 01:04:52.217578 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2026-02-04 01:04:52.217581 | orchestrator | Wednesday 04 February 2026 01:03:10 +0000 (0:00:01.158) 0:00:02.834 **** 2026-02-04 01:04:52.217585 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:04:52.217589 | orchestrator | ok: [testbed-node-1] 2026-02-04 01:04:52.217593 | orchestrator | ok: [testbed-node-2] 2026-02-04 01:04:52.217597 | orchestrator | 2026-02-04 01:04:52.217601 | orchestrator | 
TASK [horizon : include_tasks] ************************************************* 2026-02-04 01:04:52.217605 | orchestrator | Wednesday 04 February 2026 01:03:10 +0000 (0:00:00.465) 0:00:03.299 **** 2026-02-04 01:04:52.217609 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2026-02-04 01:04:52.217613 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2026-02-04 01:04:52.217616 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2026-02-04 01:04:52.217620 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2026-02-04 01:04:52.217624 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2026-02-04 01:04:52.217628 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2026-02-04 01:04:52.217632 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2026-02-04 01:04:52.217636 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2026-02-04 01:04:52.217640 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2026-02-04 01:04:52.217643 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2026-02-04 01:04:52.217647 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2026-02-04 01:04:52.217651 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2026-02-04 01:04:52.217655 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2026-02-04 01:04:52.217659 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2026-02-04 01:04:52.217662 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  
2026-02-04 01:04:52.217666 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2026-02-04 01:04:52.217670 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2026-02-04 01:04:52.217674 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2026-02-04 01:04:52.217678 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2026-02-04 01:04:52.217682 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2026-02-04 01:04:52.217685 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2026-02-04 01:04:52.217689 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2026-02-04 01:04:52.217696 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2026-02-04 01:04:52.217700 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2026-02-04 01:04:52.217705 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2026-02-04 01:04:52.217711 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2026-02-04 01:04:52.217717 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2026-02-04 01:04:52.217724 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2026-02-04 01:04:52.217730 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2026-02-04 01:04:52.217736 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2026-02-04 01:04:52.217742 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2026-02-04 01:04:52.217751 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2026-02-04 01:04:52.217758 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2026-02-04 01:04:52.217765 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2026-02-04 01:04:52.217770 | orchestrator | 2026-02-04 01:04:52.217777 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-02-04 01:04:52.217783 | orchestrator | Wednesday 04 February 2026 01:03:11 +0000 (0:00:00.762) 0:00:04.062 **** 2026-02-04 01:04:52.217789 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:04:52.217795 | orchestrator | ok: [testbed-node-1] 2026-02-04 01:04:52.217801 | orchestrator | ok: [testbed-node-2] 2026-02-04 01:04:52.217807 | orchestrator | 2026-02-04 01:04:52.217813 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-02-04 01:04:52.217819 | orchestrator | Wednesday 04 February 2026 01:03:12 +0000 (0:00:00.385) 0:00:04.448 **** 2026-02-04 01:04:52.217825 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:04:52.217831 | orchestrator | 2026-02-04 01:04:52.217836 | orchestrator | TASK [horizon 
: Update custom policy file name] ******************************** 2026-02-04 01:04:52.217842 | orchestrator | Wednesday 04 February 2026 01:03:12 +0000 (0:00:00.159) 0:00:04.608 **** 2026-02-04 01:04:52.217881 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:04:52.217887 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:04:52.217894 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:04:52.217900 | orchestrator | 2026-02-04 01:04:52.217906 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-02-04 01:04:52.217912 | orchestrator | Wednesday 04 February 2026 01:03:12 +0000 (0:00:00.555) 0:00:05.164 **** 2026-02-04 01:04:52.217917 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:04:52.217923 | orchestrator | ok: [testbed-node-1] 2026-02-04 01:04:52.217929 | orchestrator | ok: [testbed-node-2] 2026-02-04 01:04:52.217935 | orchestrator | 2026-02-04 01:04:52.217941 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-02-04 01:04:52.217947 | orchestrator | Wednesday 04 February 2026 01:03:13 +0000 (0:00:00.321) 0:00:05.485 **** 2026-02-04 01:04:52.217953 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:04:52.217960 | orchestrator | 2026-02-04 01:04:52.217966 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-02-04 01:04:52.217973 | orchestrator | Wednesday 04 February 2026 01:03:13 +0000 (0:00:00.127) 0:00:05.613 **** 2026-02-04 01:04:52.217979 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:04:52.217985 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:04:52.217992 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:04:52.217997 | orchestrator | 2026-02-04 01:04:52.218004 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-02-04 01:04:52.218010 | orchestrator | Wednesday 04 February 2026 01:03:13 +0000 
(0:00:00.329) 0:00:05.942 **** 2026-02-04 01:04:52.218065 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:04:52.218073 | orchestrator | ok: [testbed-node-1] 2026-02-04 01:04:52.218086 | orchestrator | ok: [testbed-node-2] 2026-02-04 01:04:52.218092 | orchestrator | 2026-02-04 01:04:52.218098 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-02-04 01:04:52.218104 | orchestrator | Wednesday 04 February 2026 01:03:13 +0000 (0:00:00.341) 0:00:06.284 **** 2026-02-04 01:04:52.218110 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:04:52.218116 | orchestrator | 2026-02-04 01:04:52.218122 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-02-04 01:04:52.218128 | orchestrator | Wednesday 04 February 2026 01:03:14 +0000 (0:00:00.449) 0:00:06.734 **** 2026-02-04 01:04:52.218134 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:04:52.218141 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:04:52.218147 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:04:52.218154 | orchestrator | 2026-02-04 01:04:52.218161 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-02-04 01:04:52.218175 | orchestrator | Wednesday 04 February 2026 01:03:14 +0000 (0:00:00.334) 0:00:07.068 **** 2026-02-04 01:04:52.218182 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:04:52.218187 | orchestrator | ok: [testbed-node-1] 2026-02-04 01:04:52.218191 | orchestrator | ok: [testbed-node-2] 2026-02-04 01:04:52.218195 | orchestrator | 2026-02-04 01:04:52.218199 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-02-04 01:04:52.218203 | orchestrator | Wednesday 04 February 2026 01:03:15 +0000 (0:00:00.354) 0:00:07.422 **** 2026-02-04 01:04:52.218207 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:04:52.218211 | orchestrator | 2026-02-04 01:04:52.218215 | 
orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-02-04 01:04:52.218224 | orchestrator | Wednesday 04 February 2026 01:03:15 +0000 (0:00:00.203) 0:00:07.626 **** 2026-02-04 01:04:52.218228 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:04:52.218232 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:04:52.218235 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:04:52.218239 | orchestrator | 2026-02-04 01:04:52.218243 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-02-04 01:04:52.218247 | orchestrator | Wednesday 04 February 2026 01:03:15 +0000 (0:00:00.353) 0:00:07.980 **** 2026-02-04 01:04:52.218251 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:04:52.218255 | orchestrator | ok: [testbed-node-1] 2026-02-04 01:04:52.218259 | orchestrator | ok: [testbed-node-2] 2026-02-04 01:04:52.218262 | orchestrator | 2026-02-04 01:04:52.218266 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-02-04 01:04:52.218356 | orchestrator | Wednesday 04 February 2026 01:03:16 +0000 (0:00:00.687) 0:00:08.667 **** 2026-02-04 01:04:52.218360 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:04:52.218365 | orchestrator | 2026-02-04 01:04:52.218369 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-02-04 01:04:52.218372 | orchestrator | Wednesday 04 February 2026 01:03:16 +0000 (0:00:00.159) 0:00:08.827 **** 2026-02-04 01:04:52.218376 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:04:52.218380 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:04:52.218384 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:04:52.218388 | orchestrator | 2026-02-04 01:04:52.218392 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-02-04 01:04:52.218396 | orchestrator | Wednesday 04 February 
2026 01:03:16 +0000 (0:00:00.326) 0:00:09.153 ****
2026-02-04 01:04:52.218400 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:04:52.218404 | orchestrator | ok: [testbed-node-1]
2026-02-04 01:04:52.218408 | orchestrator | ok: [testbed-node-2]
2026-02-04 01:04:52.218411 | orchestrator |
2026-02-04 01:04:52.218415 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-04 01:04:52.218419 | orchestrator | Wednesday 04 February 2026 01:03:17 +0000 (0:00:00.408) 0:00:09.562 ****
2026-02-04 01:04:52.218423 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:04:52.218427 | orchestrator |
2026-02-04 01:04:52.218431 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-04 01:04:52.218447 | orchestrator | Wednesday 04 February 2026 01:03:17 +0000 (0:00:00.132) 0:00:09.694 ****
2026-02-04 01:04:52.218451 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:04:52.218454 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:04:52.218458 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:04:52.218462 | orchestrator |
2026-02-04 01:04:52.218466 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-04 01:04:52.218470 | orchestrator | Wednesday 04 February 2026 01:03:17 +0000 (0:00:00.353) 0:00:10.048 ****
2026-02-04 01:04:52.218474 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:04:52.218478 | orchestrator | ok: [testbed-node-1]
2026-02-04 01:04:52.218482 | orchestrator | ok: [testbed-node-2]
2026-02-04 01:04:52.218485 | orchestrator |
2026-02-04 01:04:52.218489 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-04 01:04:52.218493 | orchestrator | Wednesday 04 February 2026 01:03:18 +0000 (0:00:00.632) 0:00:10.681 ****
2026-02-04 01:04:52.218497 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:04:52.218501 | orchestrator |
2026-02-04 01:04:52.218505 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-04 01:04:52.218509 | orchestrator | Wednesday 04 February 2026 01:03:18 +0000 (0:00:00.135) 0:00:10.816 ****
2026-02-04 01:04:52.218513 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:04:52.218517 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:04:52.218521 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:04:52.218524 | orchestrator |
2026-02-04 01:04:52.218528 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-04 01:04:52.218532 | orchestrator | Wednesday 04 February 2026 01:03:18 +0000 (0:00:00.317) 0:00:11.134 ****
2026-02-04 01:04:52.218536 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:04:52.218540 | orchestrator | ok: [testbed-node-1]
2026-02-04 01:04:52.218544 | orchestrator | ok: [testbed-node-2]
2026-02-04 01:04:52.218548 | orchestrator |
2026-02-04 01:04:52.218551 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-04 01:04:52.218555 | orchestrator | Wednesday 04 February 2026 01:03:19 +0000 (0:00:00.394) 0:00:11.529 ****
2026-02-04 01:04:52.218559 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:04:52.218563 | orchestrator |
2026-02-04 01:04:52.218567 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-04 01:04:52.218570 | orchestrator | Wednesday 04 February 2026 01:03:19 +0000 (0:00:00.173) 0:00:11.702 ****
2026-02-04 01:04:52.218574 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:04:52.218620 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:04:52.218625 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:04:52.218629 | orchestrator |
2026-02-04 01:04:52.218632 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-04 01:04:52.218636 | orchestrator | Wednesday 04 February 2026 01:03:19 +0000 (0:00:00.536) 0:00:12.239 ****
2026-02-04 01:04:52.218641 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:04:52.218645 | orchestrator | ok: [testbed-node-1]
2026-02-04 01:04:52.218648 | orchestrator | ok: [testbed-node-2]
2026-02-04 01:04:52.218652 | orchestrator |
2026-02-04 01:04:52.218656 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-04 01:04:52.218660 | orchestrator | Wednesday 04 February 2026 01:03:20 +0000 (0:00:00.349) 0:00:12.589 ****
2026-02-04 01:04:52.218664 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:04:52.218668 | orchestrator |
2026-02-04 01:04:52.218677 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-04 01:04:52.218681 | orchestrator | Wednesday 04 February 2026 01:03:20 +0000 (0:00:00.139) 0:00:12.728 ****
2026-02-04 01:04:52.218685 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:04:52.218689 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:04:52.218693 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:04:52.218697 | orchestrator |
2026-02-04 01:04:52.218701 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-04 01:04:52.218709 | orchestrator | Wednesday 04 February 2026 01:03:20 +0000 (0:00:00.356) 0:00:13.085 ****
2026-02-04 01:04:52.218713 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:04:52.218717 | orchestrator | ok: [testbed-node-1]
2026-02-04 01:04:52.218721 | orchestrator | ok: [testbed-node-2]
2026-02-04 01:04:52.218725 | orchestrator |
2026-02-04 01:04:52.218732 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-04 01:04:52.218736 | orchestrator | Wednesday 04 February 2026 01:03:21 +0000 (0:00:00.371) 0:00:13.456 ****
2026-02-04 01:04:52.218740 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:04:52.218744 | orchestrator |
2026-02-04 01:04:52.218748 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-04 01:04:52.218752 | orchestrator | Wednesday 04 February 2026 01:03:21 +0000 (0:00:00.127) 0:00:13.584 ****
2026-02-04 01:04:52.218756 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:04:52.218760 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:04:52.218764 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:04:52.218767 | orchestrator |
2026-02-04 01:04:52.218771 | orchestrator | TASK [horizon : Copying over config.json files for services] *******************
2026-02-04 01:04:52.218775 | orchestrator | Wednesday 04 February 2026 01:03:21 +0000 (0:00:00.563) 0:00:14.148 ****
2026-02-04 01:04:52.218779 | orchestrator | changed: [testbed-node-0]
2026-02-04 01:04:52.218783 | orchestrator | changed: [testbed-node-1]
2026-02-04 01:04:52.218787 | orchestrator | changed: [testbed-node-2]
2026-02-04 01:04:52.218791 | orchestrator |
2026-02-04 01:04:52.218794 | orchestrator | TASK [horizon : Copying over horizon.conf] *************************************
2026-02-04 01:04:52.218798 | orchestrator | Wednesday 04 February 2026 01:03:23 +0000 (0:00:02.043) 0:00:16.192 ****
2026-02-04 01:04:52.218803 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-02-04 01:04:52.218807 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-02-04 01:04:52.218810 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-02-04 01:04:52.218814 | orchestrator |
2026-02-04 01:04:52.218818 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ********************************
2026-02-04 01:04:52.218822 | orchestrator | Wednesday 04 February 2026 01:03:25 +0000 (0:00:01.861) 0:00:18.053 ****
2026-02-04 01:04:52.218826 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-02-04 01:04:52.218830 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-02-04 01:04:52.218834 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-02-04 01:04:52.218838 | orchestrator |
2026-02-04 01:04:52.218842 | orchestrator | TASK [horizon : Copying over custom-settings.py] *******************************
2026-02-04 01:04:52.218870 | orchestrator | Wednesday 04 February 2026 01:03:28 +0000 (0:00:02.899) 0:00:20.953 ****
2026-02-04 01:04:52.218877 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-02-04 01:04:52.218883 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-02-04 01:04:52.218889 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-02-04 01:04:52.218896 | orchestrator |
2026-02-04 01:04:52.218902 | orchestrator | TASK [horizon : Copying over existing policy file] *****************************
2026-02-04 01:04:52.218909 | orchestrator | Wednesday 04 February 2026 01:03:30 +0000 (0:00:02.286) 0:00:23.239 ****
2026-02-04 01:04:52.218916 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:04:52.218922 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:04:52.218928 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:04:52.218935 | orchestrator |
2026-02-04 01:04:52.218939 | orchestrator | TASK [horizon : Copying over custom themes] ************************************
2026-02-04 01:04:52.218943 | orchestrator | Wednesday 04 February 2026 01:03:31 +0000 (0:00:00.355) 0:00:23.595 ****
2026-02-04 01:04:52.218951 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:04:52.218955 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:04:52.218959 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:04:52.218963 | orchestrator |
2026-02-04 01:04:52.218967 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-02-04 01:04:52.218971 | orchestrator | Wednesday 04 February 2026 01:03:31 +0000 (0:00:00.353) 0:00:23.948 ****
2026-02-04 01:04:52.218975 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 01:04:52.218979 | orchestrator |
2026-02-04 01:04:52.218983 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ********
2026-02-04 01:04:52.218987 | orchestrator | Wednesday 04 February 2026 01:03:32 +0000 (0:00:01.048) 0:00:24.996 ****
2026-02-04 01:04:52.219004 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'],
'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-04 01:04:52.219012 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-04 01:04:52.219031 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-04 01:04:52.219037 | orchestrator | 2026-02-04 01:04:52.219042 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2026-02-04 01:04:52.219047 | orchestrator | Wednesday 04 February 2026 01:03:34 +0000 (0:00:02.069) 0:00:27.065 **** 2026-02-04 01:04:52.219055 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 
'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-04 01:04:52.219067 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:04:52.219076 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-04 01:04:52.219082 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:04:52.219092 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance 
roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-04 01:04:52.219101 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:04:52.219107 | orchestrator | 2026-02-04 01:04:52.219116 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2026-02-04 01:04:52.219123 | orchestrator | Wednesday 04 February 2026 01:03:35 +0000 (0:00:00.823) 0:00:27.889 **** 2026-02-04 01:04:52.219130 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': 
['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-04 01:04:52.219141 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:04:52.219160 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-04 01:04:52.219168 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:04:52.219175 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': 
['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-04 01:04:52.219184 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:04:52.219191 | orchestrator | 2026-02-04 01:04:52.219198 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2026-02-04 01:04:52.219204 | orchestrator | Wednesday 04 February 2026 01:03:36 +0000 (0:00:00.953) 0:00:28.842 **** 2026-02-04 01:04:52.219219 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': 
{'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 
'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-04 01:04:52.219228 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-04 01:04:52.219251 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { 
path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-04 01:04:52.219260 | orchestrator | 2026-02-04 01:04:52.219267 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-02-04 01:04:52.219274 | orchestrator | Wednesday 04 February 2026 01:03:38 +0000 (0:00:01.822) 0:00:30.665 **** 2026-02-04 01:04:52.219281 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:04:52.219286 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:04:52.219295 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:04:52.219299 | orchestrator | 2026-02-04 01:04:52.219303 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-02-04 01:04:52.219307 | orchestrator | Wednesday 04 February 2026 01:03:38 +0000 (0:00:00.331) 0:00:30.996 **** 2026-02-04 01:04:52.219311 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 01:04:52.219315 | orchestrator | 2026-02-04 01:04:52.219319 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2026-02-04 01:04:52.219323 | orchestrator | Wednesday 04 February 2026 01:03:39 +0000 (0:00:00.670) 0:00:31.666 **** 2026-02-04 01:04:52.219327 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:04:52.219331 | orchestrator | 2026-02-04 01:04:52.219335 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2026-02-04 01:04:52.219339 | orchestrator | Wednesday 04 February 
2026 01:03:42 +0000 (0:00:02.915) 0:00:34.582 **** 2026-02-04 01:04:52.219343 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:04:52.219347 | orchestrator | 2026-02-04 01:04:52.219351 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2026-02-04 01:04:52.219355 | orchestrator | Wednesday 04 February 2026 01:03:45 +0000 (0:00:02.911) 0:00:37.493 **** 2026-02-04 01:04:52.219359 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:04:52.219363 | orchestrator | 2026-02-04 01:04:52.219367 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-02-04 01:04:52.219371 | orchestrator | Wednesday 04 February 2026 01:04:01 +0000 (0:00:16.600) 0:00:54.094 **** 2026-02-04 01:04:52.219375 | orchestrator | 2026-02-04 01:04:52.219379 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-02-04 01:04:52.219383 | orchestrator | Wednesday 04 February 2026 01:04:01 +0000 (0:00:00.075) 0:00:54.169 **** 2026-02-04 01:04:52.219387 | orchestrator | 2026-02-04 01:04:52.219391 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-02-04 01:04:52.219395 | orchestrator | Wednesday 04 February 2026 01:04:01 +0000 (0:00:00.071) 0:00:54.241 **** 2026-02-04 01:04:52.219398 | orchestrator | 2026-02-04 01:04:52.219402 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2026-02-04 01:04:52.219406 | orchestrator | Wednesday 04 February 2026 01:04:01 +0000 (0:00:00.085) 0:00:54.327 **** 2026-02-04 01:04:52.219410 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:04:52.219415 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:04:52.219419 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:04:52.219423 | orchestrator | 2026-02-04 01:04:52.219426 | orchestrator | PLAY RECAP ********************************************************************* 
2026-02-04 01:04:52.219430 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-02-04 01:04:52.219436 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-02-04 01:04:52.219440 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-02-04 01:04:52.219444 | orchestrator | 2026-02-04 01:04:52.219448 | orchestrator | 2026-02-04 01:04:52.219454 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-04 01:04:52.219458 | orchestrator | Wednesday 04 February 2026 01:04:48 +0000 (0:00:46.660) 0:01:40.988 **** 2026-02-04 01:04:52.219462 | orchestrator | =============================================================================== 2026-02-04 01:04:52.219467 | orchestrator | horizon : Restart horizon container ------------------------------------ 46.66s 2026-02-04 01:04:52.219471 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 16.60s 2026-02-04 01:04:52.219475 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.92s 2026-02-04 01:04:52.219482 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.91s 2026-02-04 01:04:52.219489 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.90s 2026-02-04 01:04:52.219493 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 2.29s 2026-02-04 01:04:52.219497 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 2.07s 2026-02-04 01:04:52.219501 | orchestrator | horizon : Copying over config.json files for services ------------------- 2.04s 2026-02-04 01:04:52.219505 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 1.86s 2026-02-04 01:04:52.219509 
| orchestrator | horizon : Deploy horizon container -------------------------------------- 1.82s 2026-02-04 01:04:52.219513 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.16s 2026-02-04 01:04:52.219518 | orchestrator | horizon : include_tasks ------------------------------------------------- 1.05s 2026-02-04 01:04:52.219522 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 0.95s 2026-02-04 01:04:52.219526 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.82s 2026-02-04 01:04:52.219530 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.76s 2026-02-04 01:04:52.219534 | orchestrator | horizon : Update policy file name --------------------------------------- 0.69s 2026-02-04 01:04:52.219538 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.67s 2026-02-04 01:04:52.219542 | orchestrator | horizon : Update policy file name --------------------------------------- 0.63s 2026-02-04 01:04:52.219545 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.56s 2026-02-04 01:04:52.219549 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.56s 2026-02-04 01:04:52.219553 | orchestrator | 2026-02-04 01:04:52 | INFO  | Task 737a2e67-91d8-4512-aad2-8b11461da41a is in state STARTED 2026-02-04 01:04:52.219558 | orchestrator | 2026-02-04 01:04:52 | INFO  | Task 124114d8-76f0-4df6-94f4-252196ad2820 is in state STARTED 2026-02-04 01:04:52.219562 | orchestrator | 2026-02-04 01:04:52 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:04:55.259789 | orchestrator | 2026-02-04 01:04:55 | INFO  | Task 737a2e67-91d8-4512-aad2-8b11461da41a is in state STARTED 2026-02-04 01:04:55.264459 | orchestrator | 2026-02-04 01:04:55 | INFO  | Task 124114d8-76f0-4df6-94f4-252196ad2820 is in state STARTED 
2026-02-04 01:04:55.264533 | orchestrator | 2026-02-04 01:04:55 | INFO  | Wait 1 second(s) until the next check
[... identical STARTED checks for tasks 737a2e67-91d8-4512-aad2-8b11461da41a and 124114d8-76f0-4df6-94f4-252196ad2820, repeated every ~3 seconds through 01:05:28, elided ...]
2026-02-04 01:05:31.816999 | orchestrator | 2026-02-04 01:05:31 | INFO  | Task ab83ac70-8130-4160-8913-4010a62a2602 is in state STARTED
2026-02-04 01:05:31.821263 | orchestrator | 2026-02-04 01:05:31 | INFO  | Task 71e57857-76bf-4379-b8ce-cf5017a20895 is in state STARTED
2026-02-04 01:05:31.824484 | orchestrator | 2026-02-04 01:05:31 | INFO  | Task 124114d8-76f0-4df6-94f4-252196ad2820 is in state SUCCESS
2026-02-04 01:05:31.825932 | orchestrator | 2026-02-04 01:05:31 | INFO  | Task 0e457882-4fce-4b8e-a9d0-da52881ff114 is in state STARTED
2026-02-04 01:05:37.919228 | orchestrator | 2026-02-04 01:05:37 | INFO  | Task c46db457-d9f3-4ecd-b9c7-50a80446cd77 is in state STARTED
2026-02-04 01:05:37.920106 | orchestrator | 2026-02-04 01:05:37 | INFO  | Task ab83ac70-8130-4160-8913-4010a62a2602 is in state SUCCESS
2026-02-04 01:05:37.923645 | orchestrator | 2026-02-04 01:05:37 | INFO  | Task 5087ee4a-4457-465e-b36a-935875cc98ff is in state STARTED
[... identical STARTED checks for tasks c46db457, 737a2e67, 71e57857, 5087ee4a and 0e457882, repeated every ~3 seconds through 01:05:56, elided ...]
2026-02-04 01:05:59.268395 | orchestrator | 2026-02-04 01:05:59 | INFO  | Task 737a2e67-91d8-4512-aad2-8b11461da41a is in state SUCCESS
2026-02-04 01:05:59.268688 | orchestrator | 2026-02-04 01:05:59.268714 | orchestrator | 2026-02-04 01:05:59.268721 | orchestrator | PLAY [Apply role cephclient] *************************************************** 
2026-02-04 01:05:59.268728 | orchestrator | 2026-02-04 01:05:59.268744 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2026-02-04 01:05:59.268750 | orchestrator | Wednesday 04 February 2026 01:04:33 +0000 (0:00:00.264) 0:00:00.264 **** 2026-02-04 01:05:59.268756 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2026-02-04 01:05:59.268763 | orchestrator | 2026-02-04 01:05:59.268769 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2026-02-04 01:05:59.268775 | orchestrator | Wednesday 04 February 2026 01:04:34 +0000 (0:00:00.240) 0:00:00.505 **** 2026-02-04 01:05:59.268782 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2026-02-04 01:05:59.268802 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data) 2026-02-04 01:05:59.268810 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2026-02-04 01:05:59.268816 | orchestrator | 2026-02-04 01:05:59.268823 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2026-02-04 01:05:59.268829 | orchestrator | Wednesday 04 February 2026 01:04:35 +0000 (0:00:01.345) 0:00:01.851 **** 2026-02-04 01:05:59.268845 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2026-02-04 01:05:59.268857 | orchestrator | 2026-02-04 01:05:59.268863 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] *************************** 2026-02-04 01:05:59.268869 | orchestrator | Wednesday 04 February 2026 01:04:37 +0000 (0:00:01.594) 0:00:03.445 **** 2026-02-04 01:05:59.268876 | orchestrator | changed: [testbed-manager] 2026-02-04 01:05:59.268883 | orchestrator | 2026-02-04 01:05:59.268890 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml 
file] **************** 2026-02-04 01:05:59.268896 | orchestrator | Wednesday 04 February 2026 01:04:38 +0000 (0:00:01.018) 0:00:04.464 **** 2026-02-04 01:05:59.268903 | orchestrator | changed: [testbed-manager] 2026-02-04 01:05:59.268912 | orchestrator | 2026-02-04 01:05:59.268944 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2026-02-04 01:05:59.268951 | orchestrator | Wednesday 04 February 2026 01:04:39 +0000 (0:00:00.965) 0:00:05.430 **** 2026-02-04 01:05:59.268958 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 2026-02-04 01:05:59.268964 | orchestrator | ok: [testbed-manager] 2026-02-04 01:05:59.268969 | orchestrator | 2026-02-04 01:05:59.268973 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2026-02-04 01:05:59.268977 | orchestrator | Wednesday 04 February 2026 01:05:18 +0000 (0:00:39.745) 0:00:45.175 **** 2026-02-04 01:05:59.268981 | orchestrator | changed: [testbed-manager] => (item=ceph) 2026-02-04 01:05:59.268985 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool) 2026-02-04 01:05:59.268989 | orchestrator | changed: [testbed-manager] => (item=rados) 2026-02-04 01:05:59.268993 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin) 2026-02-04 01:05:59.268997 | orchestrator | changed: [testbed-manager] => (item=rbd) 2026-02-04 01:05:59.269000 | orchestrator | 2026-02-04 01:05:59.269004 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2026-02-04 01:05:59.269057 | orchestrator | Wednesday 04 February 2026 01:05:23 +0000 (0:00:04.326) 0:00:49.501 **** 2026-02-04 01:05:59.269063 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2026-02-04 01:05:59.269069 | orchestrator | 2026-02-04 01:05:59.269074 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2026-02-04 01:05:59.269079 | 
orchestrator | Wednesday 04 February 2026 01:05:23 +0000 (0:00:00.483) 0:00:49.985 **** 2026-02-04 01:05:59.269089 | orchestrator | skipping: [testbed-manager] 2026-02-04 01:05:59.269096 | orchestrator | 2026-02-04 01:05:59.269102 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2026-02-04 01:05:59.269109 | orchestrator | Wednesday 04 February 2026 01:05:23 +0000 (0:00:00.129) 0:00:50.114 **** 2026-02-04 01:05:59.269115 | orchestrator | skipping: [testbed-manager] 2026-02-04 01:05:59.269121 | orchestrator | 2026-02-04 01:05:59.269127 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] ******* 2026-02-04 01:05:59.269132 | orchestrator | Wednesday 04 February 2026 01:05:24 +0000 (0:00:00.561) 0:00:50.676 **** 2026-02-04 01:05:59.269138 | orchestrator | changed: [testbed-manager] 2026-02-04 01:05:59.269143 | orchestrator | 2026-02-04 01:05:59.269149 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] *** 2026-02-04 01:05:59.269154 | orchestrator | Wednesday 04 February 2026 01:05:25 +0000 (0:00:01.476) 0:00:52.152 **** 2026-02-04 01:05:59.269159 | orchestrator | changed: [testbed-manager] 2026-02-04 01:05:59.269165 | orchestrator | 2026-02-04 01:05:59.269170 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ****** 2026-02-04 01:05:59.269183 | orchestrator | Wednesday 04 February 2026 01:05:26 +0000 (0:00:00.916) 0:00:53.069 **** 2026-02-04 01:05:59.269188 | orchestrator | changed: [testbed-manager] 2026-02-04 01:05:59.269194 | orchestrator | 2026-02-04 01:05:59.269199 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] ***** 2026-02-04 01:05:59.269205 | orchestrator | Wednesday 04 February 2026 01:05:27 +0000 (0:00:00.649) 0:00:53.719 **** 2026-02-04 01:05:59.269211 | orchestrator | ok: [testbed-manager] => (item=ceph) 2026-02-04 01:05:59.269217 | 
orchestrator | ok: [testbed-manager] => (item=rados) 2026-02-04 01:05:59.269223 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2026-02-04 01:05:59.269229 | orchestrator | ok: [testbed-manager] => (item=rbd) 2026-02-04 01:05:59.269334 | orchestrator | 2026-02-04 01:05:59.269342 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 01:05:59.269349 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-04 01:05:59.269355 | orchestrator | 2026-02-04 01:05:59.269361 | orchestrator | 2026-02-04 01:05:59.269376 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-04 01:05:59.269382 | orchestrator | Wednesday 04 February 2026 01:05:28 +0000 (0:00:01.534) 0:00:55.253 **** 2026-02-04 01:05:59.269394 | orchestrator | =============================================================================== 2026-02-04 01:05:59.269400 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 39.75s 2026-02-04 01:05:59.269406 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.33s 2026-02-04 01:05:59.269412 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.59s 2026-02-04 01:05:59.269417 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.53s 2026-02-04 01:05:59.269423 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.48s 2026-02-04 01:05:59.269429 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.35s 2026-02-04 01:05:59.269435 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 1.02s 2026-02-04 01:05:59.269441 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.97s 2026-02-04 01:05:59.269447 
| orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.92s 2026-02-04 01:05:59.269453 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.65s 2026-02-04 01:05:59.269459 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.56s 2026-02-04 01:05:59.269465 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.48s 2026-02-04 01:05:59.269471 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.24s 2026-02-04 01:05:59.269478 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.13s 2026-02-04 01:05:59.269484 | orchestrator | 2026-02-04 01:05:59.269490 | orchestrator | 2026-02-04 01:05:59.269495 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-04 01:05:59.269501 | orchestrator | 2026-02-04 01:05:59.269507 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-04 01:05:59.269513 | orchestrator | Wednesday 04 February 2026 01:05:34 +0000 (0:00:00.191) 0:00:00.191 **** 2026-02-04 01:05:59.269520 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:05:59.269526 | orchestrator | ok: [testbed-node-1] 2026-02-04 01:05:59.269532 | orchestrator | ok: [testbed-node-2] 2026-02-04 01:05:59.269539 | orchestrator | 2026-02-04 01:05:59.269545 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-04 01:05:59.269552 | orchestrator | Wednesday 04 February 2026 01:05:34 +0000 (0:00:00.334) 0:00:00.525 **** 2026-02-04 01:05:59.269558 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2026-02-04 01:05:59.269564 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2026-02-04 01:05:59.269571 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2026-02-04 
01:05:59.269698 | orchestrator | 2026-02-04 01:05:59.269708 | orchestrator | PLAY [Wait for the Keystone service] ******************************************* 2026-02-04 01:05:59.269714 | orchestrator | 2026-02-04 01:05:59.269720 | orchestrator | TASK [Waiting for Keystone public port to be UP] ******************************* 2026-02-04 01:05:59.269727 | orchestrator | Wednesday 04 February 2026 01:05:35 +0000 (0:00:00.881) 0:00:01.407 **** 2026-02-04 01:05:59.269733 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:05:59.269739 | orchestrator | ok: [testbed-node-1] 2026-02-04 01:05:59.269745 | orchestrator | ok: [testbed-node-2] 2026-02-04 01:05:59.269796 | orchestrator | 2026-02-04 01:05:59.269803 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 01:05:59.269835 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-04 01:05:59.269844 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-04 01:05:59.269850 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-04 01:05:59.269856 | orchestrator | 2026-02-04 01:05:59.269860 | orchestrator | 2026-02-04 01:05:59.269864 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-04 01:05:59.269868 | orchestrator | Wednesday 04 February 2026 01:05:36 +0000 (0:00:00.719) 0:00:02.126 **** 2026-02-04 01:05:59.269871 | orchestrator | =============================================================================== 2026-02-04 01:05:59.269875 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.88s 2026-02-04 01:05:59.269879 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.72s 2026-02-04 01:05:59.269883 | orchestrator | Group hosts based on Kolla action 
--------------------------------------- 0.33s 2026-02-04 01:05:59.269887 | orchestrator | 2026-02-04 01:05:59.269895 | orchestrator | 2026-02-04 01:05:59.269899 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-04 01:05:59.269971 | orchestrator | 2026-02-04 01:05:59.269978 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-04 01:05:59.269986 | orchestrator | Wednesday 04 February 2026 01:03:07 +0000 (0:00:00.295) 0:00:00.295 **** 2026-02-04 01:05:59.269995 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:05:59.270001 | orchestrator | ok: [testbed-node-1] 2026-02-04 01:05:59.270009 | orchestrator | ok: [testbed-node-2] 2026-02-04 01:05:59.270044 | orchestrator | 2026-02-04 01:05:59.270051 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-04 01:05:59.270057 | orchestrator | Wednesday 04 February 2026 01:03:07 +0000 (0:00:00.316) 0:00:00.611 **** 2026-02-04 01:05:59.270064 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2026-02-04 01:05:59.270070 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2026-02-04 01:05:59.270076 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2026-02-04 01:05:59.270082 | orchestrator | 2026-02-04 01:05:59.270088 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2026-02-04 01:05:59.270094 | orchestrator | 2026-02-04 01:05:59.270100 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-02-04 01:05:59.270111 | orchestrator | Wednesday 04 February 2026 01:03:08 +0000 (0:00:00.482) 0:00:01.094 **** 2026-02-04 01:05:59.270118 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 01:05:59.270125 | orchestrator | 2026-02-04 01:05:59.270132 | 
orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2026-02-04 01:05:59.270138 | orchestrator | Wednesday 04 February 2026 01:03:08 +0000 (0:00:00.568) 0:00:01.662 **** 2026-02-04 01:05:59.270148 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-04 01:05:59.270162 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 
'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-04 01:05:59.270173 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-04 01:05:59.270178 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 
'timeout': '30'}}}) 2026-02-04 01:05:59.270185 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-04 01:05:59.270192 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-04 01:05:59.270196 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-04 01:05:59.270200 | orchestrator | changed: [testbed-node-2] 
=> (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-04 01:05:59.270204 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-04 01:05:59.270208 | orchestrator | 2026-02-04 01:05:59.270212 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2026-02-04 01:05:59.270216 | orchestrator | Wednesday 04 February 2026 01:03:10 +0000 (0:00:01.852) 0:00:03.514 **** 2026-02-04 01:05:59.270220 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:05:59.270224 | orchestrator | 2026-02-04 01:05:59.270230 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2026-02-04 01:05:59.270234 | orchestrator | Wednesday 04 February 2026 01:03:10 +0000 (0:00:00.161) 0:00:03.675 **** 2026-02-04 01:05:59.270238 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:05:59.270242 | orchestrator | skipping: 
[testbed-node-1] 2026-02-04 01:05:59.270246 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:05:59.270396 | orchestrator | 2026-02-04 01:05:59.270403 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2026-02-04 01:05:59.270407 | orchestrator | Wednesday 04 February 2026 01:03:11 +0000 (0:00:00.602) 0:00:04.278 **** 2026-02-04 01:05:59.270411 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-04 01:05:59.270415 | orchestrator | 2026-02-04 01:05:59.270419 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-02-04 01:05:59.270423 | orchestrator | Wednesday 04 February 2026 01:03:12 +0000 (0:00:00.940) 0:00:05.218 **** 2026-02-04 01:05:59.270427 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 01:05:59.270434 | orchestrator | 2026-02-04 01:05:59.270438 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2026-02-04 01:05:59.270441 | orchestrator | Wednesday 04 February 2026 01:03:12 +0000 (0:00:00.622) 0:00:05.841 **** 2026-02-04 01:05:59.270448 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance 
roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-04 01:05:59.270453 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-04 01:05:59.270458 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 
'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-04 01:05:59.270465 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-04 01:05:59.270472 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-04 01:05:59.270478 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-04 01:05:59.270482 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-04 01:05:59.270487 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-04 01:05:59.270491 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-04 01:05:59.270494 | orchestrator | 2026-02-04 01:05:59.270498 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2026-02-04 01:05:59.270502 | orchestrator | Wednesday 04 February 2026 01:03:16 +0000 (0:00:04.024) 0:00:09.866 **** 2026-02-04 01:05:59.270510 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-04 01:05:59.270522 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-04 01:05:59.270526 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-04 01:05:59.270530 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-04 01:05:59.270534 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-04 01:05:59.270538 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-04 01:05:59.270543 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:05:59.270547 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:05:59.270555 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': 
{'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-04 01:05:59.270563 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-04 01:05:59.270567 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-04 01:05:59.270571 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:05:59.270575 | orchestrator | 2026-02-04 01:05:59.270579 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2026-02-04 01:05:59.270582 | orchestrator | Wednesday 04 February 
2026 01:03:17 +0000 (0:00:00.695) 0:00:10.562 **** 2026-02-04 01:05:59.270587 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-04 01:05:59.270591 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-04 01:05:59.270601 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 
'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-04 01:05:59.270606 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:05:59.270611 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-04 01:05:59.270616 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-04 01:05:59.270620 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-04 01:05:59.270624 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:05:59.270628 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-04 01:05:59.270637 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-04 01:05:59.270643 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-04 01:05:59.270648 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:05:59.270651 | orchestrator | 2026-02-04 01:05:59.270655 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2026-02-04 01:05:59.270659 | orchestrator | Wednesday 04 February 2026 01:03:18 +0000 (0:00:00.857) 0:00:11.420 **** 2026-02-04 01:05:59.270663 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-04 01:05:59.270668 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-04 01:05:59.270675 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-04 01:05:59.270682 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-04 01:05:59.270687 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 
'timeout': '30'}}}) 2026-02-04 01:05:59.270691 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-04 01:05:59.270695 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-04 01:05:59.270699 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-04 01:05:59.270706 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-04 01:05:59.270710 | orchestrator | 2026-02-04 01:05:59.270714 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2026-02-04 01:05:59.270718 | orchestrator | Wednesday 04 February 2026 01:03:21 +0000 (0:00:03.307) 0:00:14.727 **** 2026-02-04 01:05:59.270725 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-04 01:05:59.270729 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-04 01:05:59.270733 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-04 01:05:59.270738 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-04 01:05:59.270746 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-04 01:05:59.270768 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-04 01:05:59.270774 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-04 01:05:59.270779 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-04 01:05:59.270783 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-04 01:05:59.270787 | orchestrator | 2026-02-04 01:05:59.270791 | orchestrator | 
TASK [keystone : Copying keystone-startup script for keystone] ***************** 2026-02-04 01:05:59.270794 | orchestrator | Wednesday 04 February 2026 01:03:28 +0000 (0:00:06.399) 0:00:21.127 **** 2026-02-04 01:05:59.270798 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:05:59.270802 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:05:59.270809 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:05:59.270812 | orchestrator | 2026-02-04 01:05:59.270816 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2026-02-04 01:05:59.270820 | orchestrator | Wednesday 04 February 2026 01:03:29 +0000 (0:00:01.584) 0:00:22.712 **** 2026-02-04 01:05:59.270824 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:05:59.270828 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:05:59.270832 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:05:59.270836 | orchestrator | 2026-02-04 01:05:59.270839 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2026-02-04 01:05:59.270843 | orchestrator | Wednesday 04 February 2026 01:03:30 +0000 (0:00:00.716) 0:00:23.428 **** 2026-02-04 01:05:59.270847 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:05:59.270851 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:05:59.270854 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:05:59.270858 | orchestrator | 2026-02-04 01:05:59.270862 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2026-02-04 01:05:59.270866 | orchestrator | Wednesday 04 February 2026 01:03:30 +0000 (0:00:00.360) 0:00:23.789 **** 2026-02-04 01:05:59.270870 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:05:59.270873 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:05:59.270877 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:05:59.270881 | orchestrator | 2026-02-04 01:05:59.270885 | orchestrator | TASK 
[keystone : Copying over existing policy file] **************************** 2026-02-04 01:05:59.270889 | orchestrator | Wednesday 04 February 2026 01:03:31 +0000 (0:00:00.555) 0:00:24.344 **** 2026-02-04 01:05:59.270896 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-04 01:05:59.270902 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 
'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-04 01:05:59.270907 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-04 01:05:59.270943 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-04 01:05:59.270948 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-04 01:05:59.270955 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-04 01:05:59.270962 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:05:59.270969 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:05:59.270978 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 
'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-04 01:05:59.270985 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-04 01:05:59.270992 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-04 01:05:59.271002 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:05:59.271008 | orchestrator | 2026-02-04 01:05:59.271014 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-02-04 01:05:59.271019 | orchestrator | Wednesday 04 February 2026 01:03:32 +0000 (0:00:00.843) 0:00:25.188 **** 2026-02-04 01:05:59.271026 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:05:59.271031 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:05:59.271037 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:05:59.271043 | orchestrator | 
2026-02-04 01:05:59.271050 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2026-02-04 01:05:59.271056 | orchestrator | Wednesday 04 February 2026 01:03:32 +0000 (0:00:00.432) 0:00:25.620 **** 2026-02-04 01:05:59.271063 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-02-04 01:05:59.271069 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-02-04 01:05:59.271074 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-02-04 01:05:59.271080 | orchestrator | 2026-02-04 01:05:59.271086 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2026-02-04 01:05:59.271092 | orchestrator | Wednesday 04 February 2026 01:03:35 +0000 (0:00:02.324) 0:00:27.944 **** 2026-02-04 01:05:59.271097 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-04 01:05:59.271104 | orchestrator | 2026-02-04 01:05:59.271111 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2026-02-04 01:05:59.271118 | orchestrator | Wednesday 04 February 2026 01:03:36 +0000 (0:00:01.129) 0:00:29.074 **** 2026-02-04 01:05:59.271124 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:05:59.271131 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:05:59.271138 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:05:59.271145 | orchestrator | 2026-02-04 01:05:59.271152 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2026-02-04 01:05:59.271209 | orchestrator | Wednesday 04 February 2026 01:03:37 +0000 (0:00:01.051) 0:00:30.125 **** 2026-02-04 01:05:59.271216 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-04 01:05:59.271220 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-02-04 01:05:59.271225 | orchestrator | 
ok: [testbed-node-1 -> localhost] 2026-02-04 01:05:59.271230 | orchestrator | 2026-02-04 01:05:59.271235 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2026-02-04 01:05:59.271243 | orchestrator | Wednesday 04 February 2026 01:03:38 +0000 (0:00:01.430) 0:00:31.556 **** 2026-02-04 01:05:59.271248 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:05:59.271252 | orchestrator | ok: [testbed-node-1] 2026-02-04 01:05:59.271257 | orchestrator | ok: [testbed-node-2] 2026-02-04 01:05:59.271261 | orchestrator | 2026-02-04 01:05:59.271266 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2026-02-04 01:05:59.271270 | orchestrator | Wednesday 04 February 2026 01:03:39 +0000 (0:00:00.374) 0:00:31.931 **** 2026-02-04 01:05:59.271275 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-02-04 01:05:59.271279 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-02-04 01:05:59.271284 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-02-04 01:05:59.271293 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-02-04 01:05:59.271298 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-02-04 01:05:59.271303 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-02-04 01:05:59.271310 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-02-04 01:05:59.271315 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-02-04 01:05:59.271320 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 
'dest': 'fernet-node-sync.sh'}) 2026-02-04 01:05:59.271324 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-02-04 01:05:59.271329 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-02-04 01:05:59.271334 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-02-04 01:05:59.271338 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-02-04 01:05:59.271343 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-02-04 01:05:59.271347 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-02-04 01:05:59.271352 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-02-04 01:05:59.271357 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-02-04 01:05:59.271361 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-02-04 01:05:59.271366 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-02-04 01:05:59.271371 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-02-04 01:05:59.271375 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-02-04 01:05:59.271379 | orchestrator | 2026-02-04 01:05:59.271384 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2026-02-04 01:05:59.271389 | orchestrator | Wednesday 04 February 2026 01:03:49 +0000 (0:00:10.404) 0:00:42.336 **** 2026-02-04 01:05:59.271393 | orchestrator | changed: [testbed-node-0] => (item={'src': 
'sshd_config.j2', 'dest': 'sshd_config'}) 2026-02-04 01:05:59.271398 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-02-04 01:05:59.271403 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-02-04 01:05:59.271408 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-02-04 01:05:59.271412 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-02-04 01:05:59.271417 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-02-04 01:05:59.271422 | orchestrator | 2026-02-04 01:05:59.271426 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2026-02-04 01:05:59.271432 | orchestrator | Wednesday 04 February 2026 01:03:52 +0000 (0:00:03.092) 0:00:45.429 **** 2026-02-04 01:05:59.271442 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance 
roundrobin']}}}}) 2026-02-04 01:05:59.271457 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-04 01:05:59.271465 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-04 01:05:59.271471 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-04 01:05:59.271477 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-04 01:05:59.271483 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen 
sshd 8023'], 'timeout': '30'}}}) 2026-02-04 01:05:59.271495 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-04 01:05:59.271505 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-04 01:05:59.271512 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-04 01:05:59.271518 | 
orchestrator | 2026-02-04 01:05:59.271524 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-02-04 01:05:59.271530 | orchestrator | Wednesday 04 February 2026 01:03:54 +0000 (0:00:02.462) 0:00:47.891 **** 2026-02-04 01:05:59.271535 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:05:59.271541 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:05:59.271547 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:05:59.271553 | orchestrator | 2026-02-04 01:05:59.271559 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2026-02-04 01:05:59.271564 | orchestrator | Wednesday 04 February 2026 01:03:55 +0000 (0:00:00.342) 0:00:48.234 **** 2026-02-04 01:05:59.271571 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:05:59.271576 | orchestrator | 2026-02-04 01:05:59.271582 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2026-02-04 01:05:59.271588 | orchestrator | Wednesday 04 February 2026 01:03:57 +0000 (0:00:02.179) 0:00:50.413 **** 2026-02-04 01:05:59.271597 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:05:59.271604 | orchestrator | 2026-02-04 01:05:59.271612 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2026-02-04 01:05:59.271617 | orchestrator | Wednesday 04 February 2026 01:04:00 +0000 (0:00:02.819) 0:00:53.233 **** 2026-02-04 01:05:59.271623 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:05:59.271630 | orchestrator | ok: [testbed-node-2] 2026-02-04 01:05:59.271636 | orchestrator | ok: [testbed-node-1] 2026-02-04 01:05:59.271641 | orchestrator | 2026-02-04 01:05:59.271647 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2026-02-04 01:05:59.271652 | orchestrator | Wednesday 04 February 2026 01:04:01 +0000 (0:00:01.274) 0:00:54.508 **** 2026-02-04 01:05:59.271662 | orchestrator | ok: 
[testbed-node-0] 2026-02-04 01:05:59.271667 | orchestrator | ok: [testbed-node-1] 2026-02-04 01:05:59.271673 | orchestrator | ok: [testbed-node-2] 2026-02-04 01:05:59.271679 | orchestrator | 2026-02-04 01:05:59.271686 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2026-02-04 01:05:59.271693 | orchestrator | Wednesday 04 February 2026 01:04:01 +0000 (0:00:00.354) 0:00:54.862 **** 2026-02-04 01:05:59.271699 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:05:59.271706 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:05:59.271712 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:05:59.271720 | orchestrator | 2026-02-04 01:05:59.271724 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2026-02-04 01:05:59.271728 | orchestrator | Wednesday 04 February 2026 01:04:02 +0000 (0:00:00.460) 0:00:55.323 **** 2026-02-04 01:05:59.271731 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:05:59.271735 | orchestrator | 2026-02-04 01:05:59.271739 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2026-02-04 01:05:59.271743 | orchestrator | Wednesday 04 February 2026 01:04:17 +0000 (0:00:15.060) 0:01:10.384 **** 2026-02-04 01:05:59.271747 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:05:59.271751 | orchestrator | 2026-02-04 01:05:59.271754 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-02-04 01:05:59.271758 | orchestrator | Wednesday 04 February 2026 01:04:28 +0000 (0:00:10.869) 0:01:21.253 **** 2026-02-04 01:05:59.271762 | orchestrator | 2026-02-04 01:05:59.271766 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-02-04 01:05:59.271770 | orchestrator | Wednesday 04 February 2026 01:04:28 +0000 (0:00:00.068) 0:01:21.322 **** 2026-02-04 01:05:59.271773 | orchestrator | 2026-02-04 
01:05:59.271777 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-02-04 01:05:59.271784 | orchestrator | Wednesday 04 February 2026 01:04:28 +0000 (0:00:00.066) 0:01:21.389 **** 2026-02-04 01:05:59.271789 | orchestrator | 2026-02-04 01:05:59.271793 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2026-02-04 01:05:59.271798 | orchestrator | Wednesday 04 February 2026 01:04:28 +0000 (0:00:00.081) 0:01:21.470 **** 2026-02-04 01:05:59.271805 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:05:59.271812 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:05:59.271819 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:05:59.271825 | orchestrator | 2026-02-04 01:05:59.271830 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2026-02-04 01:05:59.271836 | orchestrator | Wednesday 04 February 2026 01:04:38 +0000 (0:00:10.015) 0:01:31.486 **** 2026-02-04 01:05:59.271843 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:05:59.271849 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:05:59.271854 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:05:59.271859 | orchestrator | 2026-02-04 01:05:59.271864 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2026-02-04 01:05:59.271871 | orchestrator | Wednesday 04 February 2026 01:04:48 +0000 (0:00:09.902) 0:01:41.388 **** 2026-02-04 01:05:59.271877 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:05:59.271883 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:05:59.271890 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:05:59.271896 | orchestrator | 2026-02-04 01:05:59.271906 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-02-04 01:05:59.271913 | orchestrator | Wednesday 04 February 2026 01:05:00 +0000 (0:00:11.861) 0:01:53.250 
**** 2026-02-04 01:05:59.272007 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 01:05:59.272014 | orchestrator | 2026-02-04 01:05:59.272018 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2026-02-04 01:05:59.272022 | orchestrator | Wednesday 04 February 2026 01:05:01 +0000 (0:00:00.824) 0:01:54.074 **** 2026-02-04 01:05:59.272026 | orchestrator | ok: [testbed-node-1] 2026-02-04 01:05:59.272034 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:05:59.272038 | orchestrator | ok: [testbed-node-2] 2026-02-04 01:05:59.272042 | orchestrator | 2026-02-04 01:05:59.272046 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2026-02-04 01:05:59.272050 | orchestrator | Wednesday 04 February 2026 01:05:01 +0000 (0:00:00.780) 0:01:54.855 **** 2026-02-04 01:05:59.272054 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:05:59.272057 | orchestrator | 2026-02-04 01:05:59.272061 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2026-02-04 01:05:59.272065 | orchestrator | Wednesday 04 February 2026 01:05:03 +0000 (0:00:01.781) 0:01:56.637 **** 2026-02-04 01:05:59.272069 | orchestrator | changed: [testbed-node-0] => (item=RegionOne) 2026-02-04 01:05:59.272073 | orchestrator | 2026-02-04 01:05:59.272077 | orchestrator | TASK [service-ks-register : keystone | Creating services] ********************** 2026-02-04 01:05:59.272080 | orchestrator | Wednesday 04 February 2026 01:05:16 +0000 (0:00:12.374) 0:02:09.012 **** 2026-02-04 01:05:59.272084 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2026-02-04 01:05:59.272088 | orchestrator | 2026-02-04 01:05:59.272092 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] ********************* 2026-02-04 01:05:59.272154 | orchestrator | Wednesday 04 February 
2026 01:05:44 +0000 (0:00:28.130) 0:02:37.142 **** 2026-02-04 01:05:59.272160 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2026-02-04 01:05:59.272164 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2026-02-04 01:05:59.272168 | orchestrator | 2026-02-04 01:05:59.272172 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2026-02-04 01:05:59.272176 | orchestrator | Wednesday 04 February 2026 01:05:52 +0000 (0:00:07.898) 0:02:45.040 **** 2026-02-04 01:05:59.272180 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:05:59.272184 | orchestrator | 2026-02-04 01:05:59.272188 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2026-02-04 01:05:59.272192 | orchestrator | Wednesday 04 February 2026 01:05:52 +0000 (0:00:00.187) 0:02:45.228 **** 2026-02-04 01:05:59.272195 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:05:59.272199 | orchestrator | 2026-02-04 01:05:59.272203 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2026-02-04 01:05:59.272207 | orchestrator | Wednesday 04 February 2026 01:05:52 +0000 (0:00:00.213) 0:02:45.442 **** 2026-02-04 01:05:59.272211 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:05:59.272215 | orchestrator | 2026-02-04 01:05:59.272219 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ******************** 2026-02-04 01:05:59.272222 | orchestrator | Wednesday 04 February 2026 01:05:52 +0000 (0:00:00.248) 0:02:45.691 **** 2026-02-04 01:05:59.272226 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:05:59.272230 | orchestrator | 2026-02-04 01:05:59.272234 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2026-02-04 01:05:59.272238 | orchestrator | Wednesday 04 February 2026 01:05:53 
+0000 (0:00:00.758) 0:02:46.449 **** 2026-02-04 01:05:59.272242 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:05:59.272246 | orchestrator | 2026-02-04 01:05:59.272249 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-02-04 01:05:59.272253 | orchestrator | Wednesday 04 February 2026 01:05:56 +0000 (0:00:03.356) 0:02:49.806 **** 2026-02-04 01:05:59.272257 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:05:59.272261 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:05:59.272265 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:05:59.272269 | orchestrator | 2026-02-04 01:05:59.272273 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 01:05:59.272277 | orchestrator | testbed-node-0 : ok=33  changed=19  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-02-04 01:05:59.272286 | orchestrator | testbed-node-1 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-02-04 01:05:59.272293 | orchestrator | testbed-node-2 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-02-04 01:05:59.272297 | orchestrator | 2026-02-04 01:05:59.272301 | orchestrator | 2026-02-04 01:05:59.272305 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-04 01:05:59.272309 | orchestrator | Wednesday 04 February 2026 01:05:57 +0000 (0:00:00.875) 0:02:50.682 **** 2026-02-04 01:05:59.272313 | orchestrator | =============================================================================== 2026-02-04 01:05:59.272316 | orchestrator | service-ks-register : keystone | Creating services --------------------- 28.13s 2026-02-04 01:05:59.272320 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 15.06s 2026-02-04 01:05:59.272324 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 
12.37s 2026-02-04 01:05:59.272328 | orchestrator | keystone : Restart keystone container ---------------------------------- 11.86s 2026-02-04 01:05:59.272332 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 10.87s 2026-02-04 01:05:59.272336 | orchestrator | keystone : Copying files for keystone-fernet --------------------------- 10.40s 2026-02-04 01:05:59.272342 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 10.02s 2026-02-04 01:05:59.272346 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 9.90s 2026-02-04 01:05:59.272350 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 7.90s 2026-02-04 01:05:59.272354 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 6.40s 2026-02-04 01:05:59.272358 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 4.02s 2026-02-04 01:05:59.272362 | orchestrator | keystone : Creating default user role ----------------------------------- 3.36s 2026-02-04 01:05:59.272366 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.31s 2026-02-04 01:05:59.272369 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 3.09s 2026-02-04 01:05:59.272373 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.82s 2026-02-04 01:05:59.272377 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.46s 2026-02-04 01:05:59.272381 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 2.32s 2026-02-04 01:05:59.272385 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.18s 2026-02-04 01:05:59.272388 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 1.85s 
2026-02-04 01:05:59.272392 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.78s 2026-02-04 01:05:59.272396 | orchestrator | 2026-02-04 01:05:59 | INFO  | Task 71e57857-76bf-4379-b8ce-cf5017a20895 is in state STARTED 2026-02-04 01:05:59.272400 | orchestrator | 2026-02-04 01:05:59 | INFO  | Task 5087ee4a-4457-465e-b36a-935875cc98ff is in state STARTED 2026-02-04 01:05:59.272404 | orchestrator | 2026-02-04 01:05:59 | INFO  | Task 0e457882-4fce-4b8e-a9d0-da52881ff114 is in state STARTED 2026-02-04 01:05:59.272408 | orchestrator | 2026-02-04 01:05:59 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:06:02.319301 | orchestrator | 2026-02-04 01:06:02 | INFO  | Task fd6f1b60-c0ea-4ce5-b34e-bfb9360db6dd is in state STARTED 2026-02-04 01:06:02.321504 | orchestrator | 2026-02-04 01:06:02 | INFO  | Task c46db457-d9f3-4ecd-b9c7-50a80446cd77 is in state STARTED 2026-02-04 01:06:02.324524 | orchestrator | 2026-02-04 01:06:02 | INFO  | Task 71e57857-76bf-4379-b8ce-cf5017a20895 is in state STARTED 2026-02-04 01:06:02.326989 | orchestrator | 2026-02-04 01:06:02 | INFO  | Task 5087ee4a-4457-465e-b36a-935875cc98ff is in state STARTED 2026-02-04 01:06:02.329104 | orchestrator | 2026-02-04 01:06:02 | INFO  | Task 0e457882-4fce-4b8e-a9d0-da52881ff114 is in state STARTED 2026-02-04 01:06:02.329217 | orchestrator | 2026-02-04 01:06:02 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:06:05.372041 | orchestrator | 2026-02-04 01:06:05 | INFO  | Task fd6f1b60-c0ea-4ce5-b34e-bfb9360db6dd is in state STARTED 2026-02-04 01:06:05.372760 | orchestrator | 2026-02-04 01:06:05 | INFO  | Task c46db457-d9f3-4ecd-b9c7-50a80446cd77 is in state STARTED 2026-02-04 01:06:05.373896 | orchestrator | 2026-02-04 01:06:05 | INFO  | Task 71e57857-76bf-4379-b8ce-cf5017a20895 is in state STARTED 2026-02-04 01:06:05.374646 | orchestrator | 2026-02-04 01:06:05 | INFO  | Task 5087ee4a-4457-465e-b36a-935875cc98ff is in state STARTED 2026-02-04 
01:06:05.375627 | orchestrator | 2026-02-04 01:06:05 | INFO  | Task 0e457882-4fce-4b8e-a9d0-da52881ff114 is in state STARTED 2026-02-04 01:06:05.375659 | orchestrator | 2026-02-04 01:06:05 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:06:21.416568 | orchestrator | 2026-02-04 01:06:20 | INFO  | Task fd6f1b60-c0ea-4ce5-b34e-bfb9360db6dd is in state STARTED 2026-02-04 01:06:21.416616 | orchestrator | 2026-02-04 01:06:20 | INFO  | Task c46db457-d9f3-4ecd-b9c7-50a80446cd77 is in state STARTED 2026-02-04 01:06:21.416621 | orchestrator | 2026-02-04 01:06:20 | INFO  | Task 71e57857-76bf-4379-b8ce-cf5017a20895 is in state STARTED 2026-02-04 01:06:21.416626 | orchestrator | 2026-02-04 01:06:20 | INFO  | Task 5087ee4a-4457-465e-b36a-935875cc98ff is in state STARTED 2026-02-04 01:06:21.416630 | orchestrator |
2026-02-04 01:06:20 | INFO  | Task 0e457882-4fce-4b8e-a9d0-da52881ff114 is in state STARTED 2026-02-04 01:06:21.416634 | orchestrator | 2026-02-04 01:06:20 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:06:23.708911 | orchestrator | 2026-02-04 01:06:23 | INFO  | Task fd6f1b60-c0ea-4ce5-b34e-bfb9360db6dd is in state STARTED 2026-02-04 01:06:23.709231 | orchestrator | 2026-02-04 01:06:23 | INFO  | Task c46db457-d9f3-4ecd-b9c7-50a80446cd77 is in state SUCCESS 2026-02-04 01:06:23.710287 | orchestrator | 2026-02-04 01:06:23 | INFO  | Task 71e57857-76bf-4379-b8ce-cf5017a20895 is in state STARTED 2026-02-04 01:06:23.711166 | orchestrator | 2026-02-04 01:06:23 | INFO  | Task 5087ee4a-4457-465e-b36a-935875cc98ff is in state STARTED 2026-02-04 01:06:23.712063 | orchestrator | 2026-02-04 01:06:23 | INFO  | Task 0e457882-4fce-4b8e-a9d0-da52881ff114 is in state STARTED 2026-02-04 01:06:23.712205 | orchestrator | 2026-02-04 01:06:23 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:06:26.745694 | orchestrator | 2026-02-04 01:06:26 | INFO  | Task fd6f1b60-c0ea-4ce5-b34e-bfb9360db6dd is in state STARTED 2026-02-04 01:06:26.745761 | orchestrator | 2026-02-04 01:06:26 | INFO  | Task e1a0af0c-c5dc-4576-858b-027414f2952c is in state STARTED 2026-02-04 01:06:26.745767 | orchestrator | 2026-02-04 01:06:26 | INFO  | Task 71e57857-76bf-4379-b8ce-cf5017a20895 is in state STARTED 2026-02-04 01:06:26.745773 | orchestrator | 2026-02-04 01:06:26 | INFO  | Task 5087ee4a-4457-465e-b36a-935875cc98ff is in state STARTED 2026-02-04 01:06:26.745779 | orchestrator | 2026-02-04 01:06:26 | INFO  | Task 0e457882-4fce-4b8e-a9d0-da52881ff114 is in state STARTED 2026-02-04 01:06:26.745788 | orchestrator | 2026-02-04 01:06:26 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:06:29.777494 | orchestrator | 2026-02-04 01:06:29 | INFO  | Task fd6f1b60-c0ea-4ce5-b34e-bfb9360db6dd is in state STARTED 2026-02-04 01:06:29.779721 | orchestrator | 2026-02-04 01:06:29 | INFO  | 
Task e1a0af0c-c5dc-4576-858b-027414f2952c is in state STARTED 2026-02-04 01:06:29.781316 | orchestrator | 2026-02-04 01:06:29 | INFO  | Task 71e57857-76bf-4379-b8ce-cf5017a20895 is in state STARTED 2026-02-04 01:06:29.783127 | orchestrator | 2026-02-04 01:06:29 | INFO  | Task 5087ee4a-4457-465e-b36a-935875cc98ff is in state STARTED 2026-02-04 01:06:29.785846 | orchestrator | 2026-02-04 01:06:29 | INFO  | Task 0e457882-4fce-4b8e-a9d0-da52881ff114 is in state STARTED 2026-02-04 01:06:29.785900 | orchestrator | 2026-02-04 01:06:29 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:07:06.364544 | orchestrator | 2026-02-04 01:07:06 | INFO  | Task fd6f1b60-c0ea-4ce5-b34e-bfb9360db6dd is in state STARTED 2026-02-04 01:07:06.366893 | orchestrator | 2026-02-04 01:07:06 | INFO  | Task e1a0af0c-c5dc-4576-858b-027414f2952c is in state STARTED 2026-02-04 01:07:06.369458 | orchestrator | 2026-02-04 01:07:06 | INFO  | Task 71e57857-76bf-4379-b8ce-cf5017a20895 is in state STARTED 2026-02-04 01:07:06.370956 | orchestrator | 2026-02-04 01:07:06 | INFO  | Task 5087ee4a-4457-465e-b36a-935875cc98ff is in state STARTED 2026-02-04 01:07:06.373289 | orchestrator | 2026-02-04 01:07:06 | INFO  | Task
0e457882-4fce-4b8e-a9d0-da52881ff114 is in state STARTED 2026-02-04 01:07:06.373328 | orchestrator | 2026-02-04 01:07:06 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:07:09.412452 | orchestrator | 2026-02-04 01:07:09 | INFO  | Task fd6f1b60-c0ea-4ce5-b34e-bfb9360db6dd is in state STARTED 2026-02-04 01:07:09.413761 | orchestrator | 2026-02-04 01:07:09 | INFO  | Task e1a0af0c-c5dc-4576-858b-027414f2952c is in state STARTED 2026-02-04 01:07:09.413803 | orchestrator | 2026-02-04 01:07:09 | INFO  | Task 71e57857-76bf-4379-b8ce-cf5017a20895 is in state SUCCESS 2026-02-04 01:07:09.415280 | orchestrator | 2026-02-04 01:07:09.415318 | orchestrator | 2026-02-04 01:07:09.415323 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-04 01:07:09.415328 | orchestrator | 2026-02-04 01:07:09.415332 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-04 01:07:09.415336 | orchestrator | Wednesday 04 February 2026 01:05:43 +0000 (0:00:00.371) 0:00:00.371 **** 2026-02-04 01:07:09.415340 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:07:09.415345 | orchestrator | ok: [testbed-node-1] 2026-02-04 01:07:09.415350 | orchestrator | ok: [testbed-node-2] 2026-02-04 01:07:09.415354 | orchestrator | ok: [testbed-node-3] 2026-02-04 01:07:09.415357 | orchestrator | ok: [testbed-node-4] 2026-02-04 01:07:09.415361 | orchestrator | ok: [testbed-node-5] 2026-02-04 01:07:09.415365 | orchestrator | ok: [testbed-manager] 2026-02-04 01:07:09.415369 | orchestrator | 2026-02-04 01:07:09.415373 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-04 01:07:09.415377 | orchestrator | Wednesday 04 February 2026 01:05:45 +0000 (0:00:01.430) 0:00:01.802 **** 2026-02-04 01:07:09.415381 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True) 2026-02-04 01:07:09.415385 | orchestrator | ok: [testbed-node-1] => 
(item=enable_ceph_rgw_True) 2026-02-04 01:07:09.415389 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True) 2026-02-04 01:07:09.415393 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True) 2026-02-04 01:07:09.415397 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True) 2026-02-04 01:07:09.415415 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True) 2026-02-04 01:07:09.415420 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True) 2026-02-04 01:07:09.415424 | orchestrator | 2026-02-04 01:07:09.415428 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2026-02-04 01:07:09.415431 | orchestrator | 2026-02-04 01:07:09.415435 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************ 2026-02-04 01:07:09.415439 | orchestrator | Wednesday 04 February 2026 01:05:47 +0000 (0:00:02.034) 0:00:03.837 **** 2026-02-04 01:07:09.415443 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2026-02-04 01:07:09.415448 | orchestrator | 2026-02-04 01:07:09.415452 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] ********************** 2026-02-04 01:07:09.415456 | orchestrator | Wednesday 04 February 2026 01:05:49 +0000 (0:00:02.079) 0:00:05.916 **** 2026-02-04 01:07:09.415459 | orchestrator | changed: [testbed-node-0] => (item=swift (object-store)) 2026-02-04 01:07:09.415463 | orchestrator | 2026-02-04 01:07:09.415467 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] ********************* 2026-02-04 01:07:09.415471 | orchestrator | Wednesday 04 February 2026 01:05:52 +0000 (0:00:03.651) 0:00:09.568 **** 2026-02-04 01:07:09.415475 | orchestrator | changed: [testbed-node-0] => (item=swift -> 
https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal) 2026-02-04 01:07:09.415480 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public) 2026-02-04 01:07:09.415486 | orchestrator | 2026-02-04 01:07:09.415493 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] ********************** 2026-02-04 01:07:09.415501 | orchestrator | Wednesday 04 February 2026 01:05:59 +0000 (0:00:06.956) 0:00:16.524 **** 2026-02-04 01:07:09.415510 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-04 01:07:09.415516 | orchestrator | 2026-02-04 01:07:09.415521 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] ************************* 2026-02-04 01:07:09.415528 | orchestrator | Wednesday 04 February 2026 01:06:02 +0000 (0:00:03.087) 0:00:19.612 **** 2026-02-04 01:07:09.415533 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service) 2026-02-04 01:07:09.415539 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-04 01:07:09.415545 | orchestrator | 2026-02-04 01:07:09.415551 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] ************************* 2026-02-04 01:07:09.415557 | orchestrator | Wednesday 04 February 2026 01:06:07 +0000 (0:00:04.211) 0:00:23.823 **** 2026-02-04 01:07:09.415562 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-04 01:07:09.415568 | orchestrator | changed: [testbed-node-0] => (item=ResellerAdmin) 2026-02-04 01:07:09.415574 | orchestrator | 2026-02-04 01:07:09.415580 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ******************** 2026-02-04 01:07:09.415586 | orchestrator | Wednesday 04 February 2026 01:06:14 +0000 (0:00:06.938) 0:00:30.762 **** 2026-02-04 01:07:09.415591 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service -> admin) 2026-02-04 01:07:09.415597 | orchestrator | 
2026-02-04 01:07:09.415611 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 01:07:09.415617 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-04 01:07:09.415624 | orchestrator | testbed-node-0 : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-04 01:07:09.415630 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-04 01:07:09.415637 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-04 01:07:09.415643 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-04 01:07:09.415665 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-04 01:07:09.415672 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-04 01:07:09.415678 | orchestrator | 2026-02-04 01:07:09.415689 | orchestrator | 2026-02-04 01:07:09.415695 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-04 01:07:09.415701 | orchestrator | Wednesday 04 February 2026 01:06:21 +0000 (0:00:07.628) 0:00:38.391 **** 2026-02-04 01:07:09.415708 | orchestrator | =============================================================================== 2026-02-04 01:07:09.415714 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 7.63s 2026-02-04 01:07:09.415719 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 6.96s 2026-02-04 01:07:09.415725 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 6.94s 2026-02-04 01:07:09.415731 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 4.21s 2026-02-04 
01:07:09.415737 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 3.65s 2026-02-04 01:07:09.415743 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.09s 2026-02-04 01:07:09.415748 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 2.08s 2026-02-04 01:07:09.415754 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.03s 2026-02-04 01:07:09.415760 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.43s 2026-02-04 01:07:09.415766 | orchestrator | 2026-02-04 01:07:09.415772 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-02-04 01:07:09.415778 | orchestrator | 2.16.14 2026-02-04 01:07:09.415785 | orchestrator | 2026-02-04 01:07:09.415791 | orchestrator | PLAY [Bootstraph ceph dashboard] *********************************************** 2026-02-04 01:07:09.415797 | orchestrator | 2026-02-04 01:07:09.415803 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2026-02-04 01:07:09.415810 | orchestrator | Wednesday 04 February 2026 01:05:34 +0000 (0:00:00.303) 0:00:00.303 **** 2026-02-04 01:07:09.415816 | orchestrator | changed: [testbed-manager] 2026-02-04 01:07:09.415821 | orchestrator | 2026-02-04 01:07:09.415827 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2026-02-04 01:07:09.415832 | orchestrator | Wednesday 04 February 2026 01:05:36 +0000 (0:00:01.799) 0:00:02.102 **** 2026-02-04 01:07:09.415838 | orchestrator | changed: [testbed-manager] 2026-02-04 01:07:09.415843 | orchestrator | 2026-02-04 01:07:09.415849 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2026-02-04 01:07:09.415854 | orchestrator | Wednesday 04 February 2026 01:05:37 +0000 (0:00:01.157) 0:00:03.260 **** 
2026-02-04 01:07:09.415860 | orchestrator | changed: [testbed-manager] 2026-02-04 01:07:09.415865 | orchestrator | 2026-02-04 01:07:09.415871 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2026-02-04 01:07:09.415877 | orchestrator | Wednesday 04 February 2026 01:05:38 +0000 (0:00:01.162) 0:00:04.422 **** 2026-02-04 01:07:09.415883 | orchestrator | changed: [testbed-manager] 2026-02-04 01:07:09.415888 | orchestrator | 2026-02-04 01:07:09.415895 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2026-02-04 01:07:09.415902 | orchestrator | Wednesday 04 February 2026 01:05:40 +0000 (0:00:02.298) 0:00:06.721 **** 2026-02-04 01:07:09.415909 | orchestrator | changed: [testbed-manager] 2026-02-04 01:07:09.415916 | orchestrator | 2026-02-04 01:07:09.415923 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2026-02-04 01:07:09.415930 | orchestrator | Wednesday 04 February 2026 01:05:41 +0000 (0:00:01.177) 0:00:07.899 **** 2026-02-04 01:07:09.415936 | orchestrator | changed: [testbed-manager] 2026-02-04 01:07:09.415950 | orchestrator | 2026-02-04 01:07:09.415958 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2026-02-04 01:07:09.415964 | orchestrator | Wednesday 04 February 2026 01:05:43 +0000 (0:00:01.168) 0:00:09.068 **** 2026-02-04 01:07:09.416000 | orchestrator | changed: [testbed-manager] 2026-02-04 01:07:09.416008 | orchestrator | 2026-02-04 01:07:09.416014 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2026-02-04 01:07:09.416021 | orchestrator | Wednesday 04 February 2026 01:05:45 +0000 (0:00:02.240) 0:00:11.308 **** 2026-02-04 01:07:09.416027 | orchestrator | changed: [testbed-manager] 2026-02-04 01:07:09.416034 | orchestrator | 2026-02-04 01:07:09.416040 | orchestrator | TASK [Create admin user] 
******************************************************* 2026-02-04 01:07:09.416046 | orchestrator | Wednesday 04 February 2026 01:05:47 +0000 (0:00:02.210) 0:00:13.519 **** 2026-02-04 01:07:09.416053 | orchestrator | changed: [testbed-manager] 2026-02-04 01:07:09.416059 | orchestrator | 2026-02-04 01:07:09.416066 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2026-02-04 01:07:09.416077 | orchestrator | Wednesday 04 February 2026 01:06:41 +0000 (0:00:54.034) 0:01:07.553 **** 2026-02-04 01:07:09.416084 | orchestrator | skipping: [testbed-manager] 2026-02-04 01:07:09.416090 | orchestrator | 2026-02-04 01:07:09.416097 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-02-04 01:07:09.416104 | orchestrator | 2026-02-04 01:07:09.416111 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-02-04 01:07:09.416117 | orchestrator | Wednesday 04 February 2026 01:06:41 +0000 (0:00:00.196) 0:01:07.750 **** 2026-02-04 01:07:09.416121 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:07:09.416126 | orchestrator | 2026-02-04 01:07:09.416131 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-02-04 01:07:09.416135 | orchestrator | 2026-02-04 01:07:09.416140 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-02-04 01:07:09.416145 | orchestrator | Wednesday 04 February 2026 01:06:53 +0000 (0:00:11.595) 0:01:19.345 **** 2026-02-04 01:07:09.416150 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:07:09.416154 | orchestrator | 2026-02-04 01:07:09.416159 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-02-04 01:07:09.416164 | orchestrator | 2026-02-04 01:07:09.416169 | orchestrator | TASK [Restart ceph manager service] ******************************************** 
2026-02-04 01:07:09.416183 | orchestrator | Wednesday 04 February 2026 01:07:04 +0000 (0:00:11.605) 0:01:30.951 **** 2026-02-04 01:07:09.416256 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:07:09.416261 | orchestrator | 2026-02-04 01:07:09.416265 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 01:07:09.416269 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-04 01:07:09.416274 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-04 01:07:09.416278 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-04 01:07:09.416282 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-04 01:07:09.416285 | orchestrator | 2026-02-04 01:07:09.416289 | orchestrator | 2026-02-04 01:07:09.416293 | orchestrator | 2026-02-04 01:07:09.416297 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-04 01:07:09.416301 | orchestrator | Wednesday 04 February 2026 01:07:06 +0000 (0:00:01.155) 0:01:32.106 **** 2026-02-04 01:07:09.416305 | orchestrator | =============================================================================== 2026-02-04 01:07:09.416309 | orchestrator | Create admin user ------------------------------------------------------ 54.03s 2026-02-04 01:07:09.416313 | orchestrator | Restart ceph manager service ------------------------------------------- 24.36s 2026-02-04 01:07:09.416321 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 2.30s 2026-02-04 01:07:09.416325 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 2.24s 2026-02-04 01:07:09.416329 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 2.21s 
2026-02-04 01:07:09.416333 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.80s 2026-02-04 01:07:09.416337 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.18s 2026-02-04 01:07:09.416341 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.17s 2026-02-04 01:07:09.416344 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.16s 2026-02-04 01:07:09.416349 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 1.16s 2026-02-04 01:07:09.416355 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.20s 2026-02-04 01:07:09.416365 | orchestrator | 2026-02-04 01:07:09 | INFO  | Task 5087ee4a-4457-465e-b36a-935875cc98ff is in state STARTED 2026-02-04 01:07:09.418399 | orchestrator | 2026-02-04 01:07:09 | INFO  | Task 0e457882-4fce-4b8e-a9d0-da52881ff114 is in state STARTED 2026-02-04 01:07:09.418435 | orchestrator | 2026-02-04 01:07:09 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:07:12.469783 | orchestrator | 2026-02-04 01:07:12 | INFO  | Task fd6f1b60-c0ea-4ce5-b34e-bfb9360db6dd is in state STARTED 2026-02-04 01:07:12.470656 | orchestrator | 2026-02-04 01:07:12 | INFO  | Task e1a0af0c-c5dc-4576-858b-027414f2952c is in state STARTED 2026-02-04 01:07:12.471564 | orchestrator | 2026-02-04 01:07:12 | INFO  | Task 5087ee4a-4457-465e-b36a-935875cc98ff is in state STARTED 2026-02-04 01:07:12.472833 | orchestrator | 2026-02-04 01:07:12 | INFO  | Task 0e457882-4fce-4b8e-a9d0-da52881ff114 is in state STARTED 2026-02-04 01:07:12.472866 | orchestrator | 2026-02-04 01:07:12 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:07:15.518457 | orchestrator | 2026-02-04 01:07:15 | INFO  | Task fd6f1b60-c0ea-4ce5-b34e-bfb9360db6dd is in state STARTED 2026-02-04 01:07:15.519706 | orchestrator | 2026-02-04 01:07:15 | INFO  | Task 
e1a0af0c-c5dc-4576-858b-027414f2952c is in state STARTED 2026-02-04 01:07:15.522565 | orchestrator | 2026-02-04 01:07:15 | INFO  | Task 5087ee4a-4457-465e-b36a-935875cc98ff is in state STARTED 2026-02-04 01:07:15.523682 | orchestrator | 2026-02-04 01:07:15 | INFO  | Task 0e457882-4fce-4b8e-a9d0-da52881ff114 is in state STARTED 2026-02-04 01:07:15.523724 | orchestrator | 2026-02-04 01:07:15 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:08:47.197901 | orchestrator | 2026-02-04 01:08:47 | INFO  | Task fd6f1b60-c0ea-4ce5-b34e-bfb9360db6dd is in state STARTED 2026-02-04 01:08:47.198563 | orchestrator | 2026-02-04 01:08:47 | INFO  | Task 
e1a0af0c-c5dc-4576-858b-027414f2952c is in state STARTED 2026-02-04 01:08:47.200126 | orchestrator | 2026-02-04 01:08:47 | INFO  | Task 5087ee4a-4457-465e-b36a-935875cc98ff is in state STARTED 2026-02-04 01:08:47.200167 | orchestrator | 2026-02-04 01:08:47 | INFO  | Task 0e457882-4fce-4b8e-a9d0-da52881ff114 is in state STARTED 2026-02-04 01:08:47.200175 | orchestrator | 2026-02-04 01:08:47 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:08:50.240338 | orchestrator | 2026-02-04 01:08:50 | INFO  | Task fd6f1b60-c0ea-4ce5-b34e-bfb9360db6dd is in state STARTED 2026-02-04 01:08:50.240420 | orchestrator | 2026-02-04 01:08:50 | INFO  | Task e1a0af0c-c5dc-4576-858b-027414f2952c is in state STARTED 2026-02-04 01:08:50.240432 | orchestrator | 2026-02-04 01:08:50 | INFO  | Task 5087ee4a-4457-465e-b36a-935875cc98ff is in state STARTED 2026-02-04 01:08:50.241274 | orchestrator | 2026-02-04 01:08:50 | INFO  | Task 0e457882-4fce-4b8e-a9d0-da52881ff114 is in state STARTED 2026-02-04 01:08:50.241304 | orchestrator | 2026-02-04 01:08:50 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:08:53.280523 | orchestrator | 2026-02-04 01:08:53 | INFO  | Task fd6f1b60-c0ea-4ce5-b34e-bfb9360db6dd is in state STARTED 2026-02-04 01:08:53.284117 | orchestrator | 2026-02-04 01:08:53 | INFO  | Task e1a0af0c-c5dc-4576-858b-027414f2952c is in state STARTED 2026-02-04 01:08:53.285988 | orchestrator | 2026-02-04 01:08:53 | INFO  | Task 5087ee4a-4457-465e-b36a-935875cc98ff is in state STARTED 2026-02-04 01:08:53.288938 | orchestrator | 2026-02-04 01:08:53 | INFO  | Task 0e457882-4fce-4b8e-a9d0-da52881ff114 is in state STARTED 2026-02-04 01:08:53.288987 | orchestrator | 2026-02-04 01:08:53 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:08:56.338977 | orchestrator | 2026-02-04 01:08:56 | INFO  | Task fd6f1b60-c0ea-4ce5-b34e-bfb9360db6dd is in state STARTED 2026-02-04 01:08:56.343667 | orchestrator | 2026-02-04 01:08:56 | INFO  | Task 
e1a0af0c-c5dc-4576-858b-027414f2952c is in state STARTED 2026-02-04 01:08:56.345632 | orchestrator | 2026-02-04 01:08:56 | INFO  | Task 5087ee4a-4457-465e-b36a-935875cc98ff is in state STARTED 2026-02-04 01:08:56.347411 | orchestrator | 2026-02-04 01:08:56 | INFO  | Task 0e457882-4fce-4b8e-a9d0-da52881ff114 is in state STARTED 2026-02-04 01:08:56.347448 | orchestrator | 2026-02-04 01:08:56 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:08:59.406932 | orchestrator | 2026-02-04 01:08:59 | INFO  | Task fd6f1b60-c0ea-4ce5-b34e-bfb9360db6dd is in state STARTED 2026-02-04 01:08:59.408674 | orchestrator | 2026-02-04 01:08:59 | INFO  | Task e1a0af0c-c5dc-4576-858b-027414f2952c is in state STARTED 2026-02-04 01:08:59.410733 | orchestrator | 2026-02-04 01:08:59 | INFO  | Task 5087ee4a-4457-465e-b36a-935875cc98ff is in state STARTED 2026-02-04 01:08:59.412371 | orchestrator | 2026-02-04 01:08:59 | INFO  | Task 0e457882-4fce-4b8e-a9d0-da52881ff114 is in state STARTED 2026-02-04 01:08:59.412600 | orchestrator | 2026-02-04 01:08:59 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:09:02.447941 | orchestrator | 2026-02-04 01:09:02 | INFO  | Task fd6f1b60-c0ea-4ce5-b34e-bfb9360db6dd is in state STARTED 2026-02-04 01:09:02.450189 | orchestrator | 2026-02-04 01:09:02 | INFO  | Task e1a0af0c-c5dc-4576-858b-027414f2952c is in state STARTED 2026-02-04 01:09:02.450235 | orchestrator | 2026-02-04 01:09:02 | INFO  | Task 5087ee4a-4457-465e-b36a-935875cc98ff is in state STARTED 2026-02-04 01:09:02.451381 | orchestrator | 2026-02-04 01:09:02 | INFO  | Task 0e457882-4fce-4b8e-a9d0-da52881ff114 is in state STARTED 2026-02-04 01:09:02.451416 | orchestrator | 2026-02-04 01:09:02 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:09:05.513137 | orchestrator | 2026-02-04 01:09:05 | INFO  | Task fd6f1b60-c0ea-4ce5-b34e-bfb9360db6dd is in state STARTED 2026-02-04 01:09:05.514343 | orchestrator | 2026-02-04 01:09:05 | INFO  | Task 
e1a0af0c-c5dc-4576-858b-027414f2952c is in state STARTED 2026-02-04 01:09:05.515541 | orchestrator | 2026-02-04 01:09:05 | INFO  | Task 5087ee4a-4457-465e-b36a-935875cc98ff is in state STARTED 2026-02-04 01:09:05.517922 | orchestrator | 2026-02-04 01:09:05 | INFO  | Task 0e457882-4fce-4b8e-a9d0-da52881ff114 is in state STARTED 2026-02-04 01:09:05.517993 | orchestrator | 2026-02-04 01:09:05 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:09:08.570183 | orchestrator | 2026-02-04 01:09:08 | INFO  | Task fd6f1b60-c0ea-4ce5-b34e-bfb9360db6dd is in state STARTED 2026-02-04 01:09:08.571732 | orchestrator | 2026-02-04 01:09:08 | INFO  | Task e1a0af0c-c5dc-4576-858b-027414f2952c is in state STARTED 2026-02-04 01:09:08.573065 | orchestrator | 2026-02-04 01:09:08 | INFO  | Task 5087ee4a-4457-465e-b36a-935875cc98ff is in state STARTED 2026-02-04 01:09:08.574010 | orchestrator | 2026-02-04 01:09:08 | INFO  | Task 0e457882-4fce-4b8e-a9d0-da52881ff114 is in state STARTED 2026-02-04 01:09:08.574066 | orchestrator | 2026-02-04 01:09:08 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:09:11.630639 | orchestrator | 2026-02-04 01:09:11 | INFO  | Task fd6f1b60-c0ea-4ce5-b34e-bfb9360db6dd is in state STARTED 2026-02-04 01:09:11.630695 | orchestrator | 2026-02-04 01:09:11 | INFO  | Task e1a0af0c-c5dc-4576-858b-027414f2952c is in state STARTED 2026-02-04 01:09:11.631688 | orchestrator | 2026-02-04 01:09:11 | INFO  | Task 5087ee4a-4457-465e-b36a-935875cc98ff is in state STARTED 2026-02-04 01:09:11.636900 | orchestrator | 2026-02-04 01:09:11 | INFO  | Task 0e457882-4fce-4b8e-a9d0-da52881ff114 is in state STARTED 2026-02-04 01:09:11.636941 | orchestrator | 2026-02-04 01:09:11 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:09:14.674621 | orchestrator | 2026-02-04 01:09:14 | INFO  | Task fd6f1b60-c0ea-4ce5-b34e-bfb9360db6dd is in state STARTED 2026-02-04 01:09:14.676461 | orchestrator | 2026-02-04 01:09:14 | INFO  | Task 
e1a0af0c-c5dc-4576-858b-027414f2952c is in state STARTED 2026-02-04 01:09:14.678500 | orchestrator | 2026-02-04 01:09:14 | INFO  | Task 6de41111-19cd-4ecd-8571-65308ec3f489 is in state STARTED 2026-02-04 01:09:14.680649 | orchestrator | 2026-02-04 01:09:14 | INFO  | Task 5087ee4a-4457-465e-b36a-935875cc98ff is in state SUCCESS 2026-02-04 01:09:14.682123 | orchestrator | 2026-02-04 01:09:14.682162 | orchestrator | 2026-02-04 01:09:14.682168 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-04 01:09:14.682172 | orchestrator | 2026-02-04 01:09:14.682176 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-04 01:09:14.682181 | orchestrator | Wednesday 04 February 2026 01:05:43 +0000 (0:00:00.304) 0:00:00.304 **** 2026-02-04 01:09:14.682185 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:09:14.682190 | orchestrator | ok: [testbed-node-1] 2026-02-04 01:09:14.682194 | orchestrator | ok: [testbed-node-2] 2026-02-04 01:09:14.682198 | orchestrator | 2026-02-04 01:09:14.682202 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-04 01:09:14.682206 | orchestrator | Wednesday 04 February 2026 01:05:43 +0000 (0:00:00.401) 0:00:00.706 **** 2026-02-04 01:09:14.682210 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True) 2026-02-04 01:09:14.682214 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True) 2026-02-04 01:09:14.682218 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True) 2026-02-04 01:09:14.682222 | orchestrator | 2026-02-04 01:09:14.682226 | orchestrator | PLAY [Apply role glance] ******************************************************* 2026-02-04 01:09:14.682230 | orchestrator | 2026-02-04 01:09:14.682234 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-02-04 01:09:14.682238 | orchestrator | Wednesday 04 February 2026 
01:05:44 +0000 (0:00:00.614) 0:00:01.320 **** 2026-02-04 01:09:14.682241 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 01:09:14.682246 | orchestrator | 2026-02-04 01:09:14.682250 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************ 2026-02-04 01:09:14.682254 | orchestrator | Wednesday 04 February 2026 01:05:45 +0000 (0:00:01.308) 0:00:02.628 **** 2026-02-04 01:09:14.682274 | orchestrator | changed: [testbed-node-0] => (item=glance (image)) 2026-02-04 01:09:14.682278 | orchestrator | 2026-02-04 01:09:14.682289 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] *********************** 2026-02-04 01:09:14.682296 | orchestrator | Wednesday 04 February 2026 01:05:50 +0000 (0:00:04.859) 0:00:07.488 **** 2026-02-04 01:09:14.682301 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal) 2026-02-04 01:09:14.682305 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public) 2026-02-04 01:09:14.682308 | orchestrator | 2026-02-04 01:09:14.682312 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************ 2026-02-04 01:09:14.682317 | orchestrator | Wednesday 04 February 2026 01:05:57 +0000 (0:00:06.881) 0:00:14.369 **** 2026-02-04 01:09:14.682324 | orchestrator | changed: [testbed-node-0] => (item=service) 2026-02-04 01:09:14.682333 | orchestrator | 2026-02-04 01:09:14.682340 | orchestrator | TASK [service-ks-register : glance | Creating users] *************************** 2026-02-04 01:09:14.682347 | orchestrator | Wednesday 04 February 2026 01:06:00 +0000 (0:00:03.331) 0:00:17.701 **** 2026-02-04 01:09:14.682362 | orchestrator | changed: [testbed-node-0] => (item=glance -> service) 2026-02-04 01:09:14.682369 | orchestrator | [WARNING]: Module did not set no_log for update_password 
2026-02-04 01:09:14.682375 | orchestrator | 2026-02-04 01:09:14.682381 | orchestrator | TASK [service-ks-register : glance | Creating roles] *************************** 2026-02-04 01:09:14.682386 | orchestrator | Wednesday 04 February 2026 01:06:04 +0000 (0:00:03.994) 0:00:21.695 **** 2026-02-04 01:09:14.682392 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-04 01:09:14.682399 | orchestrator | 2026-02-04 01:09:14.682405 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2026-02-04 01:09:14.682412 | orchestrator | Wednesday 04 February 2026 01:06:08 +0000 (0:00:04.106) 0:00:25.802 **** 2026-02-04 01:09:14.682418 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2026-02-04 01:09:14.682424 | orchestrator | 2026-02-04 01:09:14.682431 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2026-02-04 01:09:14.682437 | orchestrator | Wednesday 04 February 2026 01:06:13 +0000 (0:00:04.377) 0:00:30.179 **** 2026-02-04 01:09:14.682456 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 
'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-04 01:09:14.682466 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 
fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-04 01:09:14.682482 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 
192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-04 01:09:14.682490 | orchestrator | 2026-02-04 01:09:14.682496 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-02-04 01:09:14.682502 | orchestrator | Wednesday 04 February 2026 01:06:23 +0000 (0:00:10.658) 0:00:40.837 **** 2026-02-04 01:09:14.682507 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 01:09:14.682511 | orchestrator | 2026-02-04 01:09:14.682515 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2026-02-04 01:09:14.682523 | orchestrator | Wednesday 04 February 2026 01:06:24 +0000 (0:00:00.664) 0:00:41.502 **** 2026-02-04 01:09:14.682527 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:09:14.682531 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:09:14.682537 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:09:14.682541 | orchestrator | 2026-02-04 01:09:14.682545 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2026-02-04 01:09:14.682549 | orchestrator | Wednesday 04 February 2026 01:06:28 +0000 (0:00:03.946) 0:00:45.449 **** 2026-02-04 01:09:14.682553 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-02-04 01:09:14.682557 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-02-04 01:09:14.682561 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-02-04 01:09:14.682565 | orchestrator | 2026-02-04 01:09:14.682568 | orchestrator | TASK [glance : Copy over ceph Glance 
keyrings] ********************************* 2026-02-04 01:09:14.682572 | orchestrator | Wednesday 04 February 2026 01:06:30 +0000 (0:00:01.712) 0:00:47.162 **** 2026-02-04 01:09:14.682578 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-02-04 01:09:14.682584 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-02-04 01:09:14.682591 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-02-04 01:09:14.682596 | orchestrator | 2026-02-04 01:09:14.682603 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2026-02-04 01:09:14.682608 | orchestrator | Wednesday 04 February 2026 01:06:31 +0000 (0:00:01.298) 0:00:48.461 **** 2026-02-04 01:09:14.682613 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:09:14.682628 | orchestrator | ok: [testbed-node-1] 2026-02-04 01:09:14.682635 | orchestrator | ok: [testbed-node-2] 2026-02-04 01:09:14.682647 | orchestrator | 2026-02-04 01:09:14.682653 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2026-02-04 01:09:14.682660 | orchestrator | Wednesday 04 February 2026 01:06:32 +0000 (0:00:00.951) 0:00:49.413 **** 2026-02-04 01:09:14.682667 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:09:14.682674 | orchestrator | 2026-02-04 01:09:14.682681 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2026-02-04 01:09:14.682686 | orchestrator | Wednesday 04 February 2026 01:06:32 +0000 (0:00:00.148) 0:00:49.561 **** 2026-02-04 01:09:14.682690 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:09:14.682694 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:09:14.682698 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:09:14.682702 | orchestrator | 2026-02-04 
01:09:14.682706 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-02-04 01:09:14.682712 | orchestrator | Wednesday 04 February 2026 01:06:33 +0000 (0:00:00.459) 0:00:50.021 **** 2026-02-04 01:09:14.682716 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 01:09:14.682720 | orchestrator | 2026-02-04 01:09:14.682724 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2026-02-04 01:09:14.682728 | orchestrator | Wednesday 04 February 2026 01:06:33 +0000 (0:00:00.585) 0:00:50.606 **** 2026-02-04 01:09:14.682732 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-04 01:09:14.682745 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 
2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-04 01:09:14.682752 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-04 01:09:14.682759 | orchestrator | 2026-02-04 01:09:14.682763 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 
2026-02-04 01:09:14.682767 | orchestrator | Wednesday 04 February 2026 01:06:38 +0000 (0:00:04.894) 0:00:55.500 **** 2026-02-04 01:09:14.682775 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-04 01:09:14.682779 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:09:14.682785 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-04 01:09:14.682790 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:09:14.682797 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 
'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-04 01:09:14.682804 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:09:14.682808 | orchestrator | 2026-02-04 01:09:14.682812 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2026-02-04 01:09:14.682816 | orchestrator | Wednesday 04 February 2026 01:06:44 +0000 (0:00:06.477) 0:01:01.978 **** 2026-02-04 01:09:14.682822 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 
'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-04 01:09:14.682826 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:09:14.682831 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-04 01:09:14.682837 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:09:14.682845 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-04 01:09:14.682849 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:09:14.682853 | orchestrator | 2026-02-04 01:09:14.682857 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2026-02-04 01:09:14.682861 | orchestrator | Wednesday 04 February 2026 01:06:52 +0000 (0:00:07.062) 0:01:09.041 **** 2026-02-04 01:09:14.682865 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:09:14.682869 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:09:14.682873 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:09:14.682877 | orchestrator | 2026-02-04 01:09:14.682881 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2026-02-04 01:09:14.682886 | orchestrator | Wednesday 04 February 2026 01:06:57 +0000 (0:00:05.040) 0:01:14.081 **** 2026-02-04 01:09:14.682890 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-04 01:09:14.682903 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-04 01:09:14.682910 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-04 01:09:14.682917 | orchestrator | 2026-02-04 01:09:14.682921 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2026-02-04 01:09:14.682925 | orchestrator | Wednesday 04 February 2026 01:07:02 +0000 (0:00:05.261) 0:01:19.343 **** 2026-02-04 01:09:14.682928 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:09:14.682932 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:09:14.682936 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:09:14.682942 | orchestrator | 2026-02-04 01:09:14.682950 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2026-02-04 01:09:14.682959 | orchestrator | Wednesday 04 February 2026 01:07:10 +0000 (0:00:08.268) 0:01:27.611 **** 2026-02-04 01:09:14.682965 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:09:14.682971 | orchestrator | 
skipping: [testbed-node-0] 2026-02-04 01:09:14.682977 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:09:14.682983 | orchestrator | 2026-02-04 01:09:14.682989 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 2026-02-04 01:09:14.682994 | orchestrator | Wednesday 04 February 2026 01:07:17 +0000 (0:00:06.457) 0:01:34.069 **** 2026-02-04 01:09:14.683001 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:09:14.683007 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:09:14.683014 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:09:14.683020 | orchestrator | 2026-02-04 01:09:14.683041 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2026-02-04 01:09:14.683048 | orchestrator | Wednesday 04 February 2026 01:07:24 +0000 (0:00:07.267) 0:01:41.337 **** 2026-02-04 01:09:14.683055 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:09:14.683103 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:09:14.683109 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:09:14.683113 | orchestrator | 2026-02-04 01:09:14.683117 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2026-02-04 01:09:14.683121 | orchestrator | Wednesday 04 February 2026 01:07:30 +0000 (0:00:06.509) 0:01:47.846 **** 2026-02-04 01:09:14.683125 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:09:14.683129 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:09:14.683133 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:09:14.683137 | orchestrator | 2026-02-04 01:09:14.683140 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2026-02-04 01:09:14.683144 | orchestrator | Wednesday 04 February 2026 01:07:37 +0000 (0:00:06.429) 0:01:54.276 **** 2026-02-04 01:09:14.683148 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:09:14.683152 | orchestrator | 
skipping: [testbed-node-1] 2026-02-04 01:09:14.683156 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:09:14.683160 | orchestrator | 2026-02-04 01:09:14.683164 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2026-02-04 01:09:14.683168 | orchestrator | Wednesday 04 February 2026 01:07:37 +0000 (0:00:00.609) 0:01:54.886 **** 2026-02-04 01:09:14.683172 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-02-04 01:09:14.683176 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:09:14.683180 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-02-04 01:09:14.683188 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:09:14.683192 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-02-04 01:09:14.683196 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:09:14.683200 | orchestrator | 2026-02-04 01:09:14.683203 | orchestrator | TASK [glance : Generating 'hostnqn' file for glance_api] *********************** 2026-02-04 01:09:14.683207 | orchestrator | Wednesday 04 February 2026 01:07:43 +0000 (0:00:05.745) 0:02:00.631 **** 2026-02-04 01:09:14.683211 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:09:14.683215 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:09:14.683219 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:09:14.683223 | orchestrator | 2026-02-04 01:09:14.683227 | orchestrator | TASK [glance : Check glance containers] **************************************** 2026-02-04 01:09:14.683231 | orchestrator | Wednesday 04 February 2026 01:07:50 +0000 (0:00:06.647) 0:02:07.279 **** 2026-02-04 01:09:14.683238 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 
'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-04 01:09:14.683247 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-04 01:09:14.683256 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-04 01:09:14.683261 | orchestrator | 2026-02-04 01:09:14.683265 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-02-04 01:09:14.683269 | orchestrator | Wednesday 04 February 2026 01:07:54 +0000 (0:00:04.639) 0:02:11.918 **** 2026-02-04 01:09:14.683272 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:09:14.683276 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:09:14.683280 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:09:14.683284 | orchestrator | 2026-02-04 01:09:14.683288 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2026-02-04 01:09:14.683292 | orchestrator | Wednesday 04 February 2026 01:07:55 +0000 (0:00:00.446) 0:02:12.364 **** 2026-02-04 01:09:14.683296 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:09:14.683300 | orchestrator | 2026-02-04 01:09:14.683303 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] ********** 
2026-02-04 01:09:14.683307 | orchestrator | Wednesday 04 February 2026 01:07:57 +0000 (0:00:02.393) 0:02:14.758 **** 2026-02-04 01:09:14.683311 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:09:14.683315 | orchestrator | 2026-02-04 01:09:14.683319 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2026-02-04 01:09:14.683323 | orchestrator | Wednesday 04 February 2026 01:07:59 +0000 (0:00:02.149) 0:02:16.907 **** 2026-02-04 01:09:14.683327 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:09:14.683330 | orchestrator | 2026-02-04 01:09:14.683334 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2026-02-04 01:09:14.683338 | orchestrator | Wednesday 04 February 2026 01:08:01 +0000 (0:00:02.016) 0:02:18.924 **** 2026-02-04 01:09:14.683342 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:09:14.683346 | orchestrator | 2026-02-04 01:09:14.683350 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2026-02-04 01:09:14.683353 | orchestrator | Wednesday 04 February 2026 01:08:29 +0000 (0:00:27.095) 0:02:46.020 **** 2026-02-04 01:09:14.683357 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:09:14.683363 | orchestrator | 2026-02-04 01:09:14.683367 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-02-04 01:09:14.683371 | orchestrator | Wednesday 04 February 2026 01:08:30 +0000 (0:00:01.844) 0:02:47.865 **** 2026-02-04 01:09:14.683375 | orchestrator | 2026-02-04 01:09:14.683382 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-02-04 01:09:14.683386 | orchestrator | Wednesday 04 February 2026 01:08:31 +0000 (0:00:00.308) 0:02:48.173 **** 2026-02-04 01:09:14.683390 | orchestrator | 2026-02-04 01:09:14.683394 | orchestrator | TASK [glance : Flush handlers] ************************************************* 
2026-02-04 01:09:14.683398 | orchestrator | Wednesday 04 February 2026 01:08:31 +0000 (0:00:00.068) 0:02:48.241 **** 2026-02-04 01:09:14.683401 | orchestrator | 2026-02-04 01:09:14.683405 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2026-02-04 01:09:14.683409 | orchestrator | Wednesday 04 February 2026 01:08:31 +0000 (0:00:00.072) 0:02:48.314 **** 2026-02-04 01:09:14.683413 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:09:14.683417 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:09:14.683421 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:09:14.683424 | orchestrator | 2026-02-04 01:09:14.683428 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 01:09:14.683433 | orchestrator | testbed-node-0 : ok=27  changed=20  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-02-04 01:09:14.683438 | orchestrator | testbed-node-1 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-02-04 01:09:14.683442 | orchestrator | testbed-node-2 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-02-04 01:09:14.683445 | orchestrator | 2026-02-04 01:09:14.683450 | orchestrator | 2026-02-04 01:09:14.683457 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-04 01:09:14.683463 | orchestrator | Wednesday 04 February 2026 01:09:13 +0000 (0:00:41.776) 0:03:30.091 **** 2026-02-04 01:09:14.683468 | orchestrator | =============================================================================== 2026-02-04 01:09:14.683474 | orchestrator | glance : Restart glance-api container ---------------------------------- 41.78s 2026-02-04 01:09:14.683480 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 27.10s 2026-02-04 01:09:14.683486 | orchestrator | glance : Ensuring config directories exist 
----------------------------- 10.66s 2026-02-04 01:09:14.683493 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 8.27s 2026-02-04 01:09:14.683499 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 7.27s 2026-02-04 01:09:14.683506 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 7.06s 2026-02-04 01:09:14.683512 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 6.88s 2026-02-04 01:09:14.683518 | orchestrator | glance : Generating 'hostnqn' file for glance_api ----------------------- 6.65s 2026-02-04 01:09:14.683526 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 6.51s 2026-02-04 01:09:14.683533 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS certificate --- 6.48s 2026-02-04 01:09:14.683539 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 6.46s 2026-02-04 01:09:14.683545 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 6.43s 2026-02-04 01:09:14.683551 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 5.75s 2026-02-04 01:09:14.683556 | orchestrator | glance : Copying over config.json files for services -------------------- 5.26s 2026-02-04 01:09:14.683562 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 5.04s 2026-02-04 01:09:14.683568 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 4.89s 2026-02-04 01:09:14.683574 | orchestrator | service-ks-register : glance | Creating services ------------------------ 4.86s 2026-02-04 01:09:14.683585 | orchestrator | glance : Check glance containers ---------------------------------------- 4.64s 2026-02-04 01:09:14.683592 | orchestrator | service-ks-register : glance | Granting user roles 
---------------------- 4.38s 2026-02-04 01:09:14.683599 | orchestrator | service-ks-register : glance | Creating roles --------------------------- 4.11s 2026-02-04 01:09:14.683605 | orchestrator | 2026-02-04 01:09:14 | INFO  | Task 0e457882-4fce-4b8e-a9d0-da52881ff114 is in state STARTED 2026-02-04 01:09:14.683615 | orchestrator | 2026-02-04 01:09:14 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:09:17.737358 | orchestrator | 2026-02-04 01:09:17 | INFO  | Task fd6f1b60-c0ea-4ce5-b34e-bfb9360db6dd is in state STARTED 2026-02-04 01:09:17.739547 | orchestrator | 2026-02-04 01:09:17 | INFO  | Task e1a0af0c-c5dc-4576-858b-027414f2952c is in state STARTED 2026-02-04 01:09:17.743795 | orchestrator | 2026-02-04 01:09:17 | INFO  | Task 6de41111-19cd-4ecd-8571-65308ec3f489 is in state STARTED 2026-02-04 01:09:17.745957 | orchestrator | 2026-02-04 01:09:17 | INFO  | Task 0e457882-4fce-4b8e-a9d0-da52881ff114 is in state STARTED 2026-02-04 01:09:17.746106 | orchestrator | 2026-02-04 01:09:17 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:09:20.794607 | orchestrator | 2026-02-04 01:09:20 | INFO  | Task fd6f1b60-c0ea-4ce5-b34e-bfb9360db6dd is in state STARTED 2026-02-04 01:09:20.798345 | orchestrator | 2026-02-04 01:09:20 | INFO  | Task e1a0af0c-c5dc-4576-858b-027414f2952c is in state STARTED 2026-02-04 01:09:20.801939 | orchestrator | 2026-02-04 01:09:20 | INFO  | Task 6de41111-19cd-4ecd-8571-65308ec3f489 is in state STARTED 2026-02-04 01:09:20.804773 | orchestrator | 2026-02-04 01:09:20 | INFO  | Task 0e457882-4fce-4b8e-a9d0-da52881ff114 is in state STARTED 2026-02-04 01:09:20.805978 | orchestrator | 2026-02-04 01:09:20 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:09:23.841936 | orchestrator | 2026-02-04 01:09:23.842000 | orchestrator | 2026-02-04 01:09:23 | INFO  | Task fd6f1b60-c0ea-4ce5-b34e-bfb9360db6dd is in state SUCCESS 2026-02-04 01:09:23.843554 | orchestrator | 2026-02-04 01:09:23.843607 | orchestrator | PLAY [Group hosts 
based on configuration] ************************************** 2026-02-04 01:09:23.843616 | orchestrator | 2026-02-04 01:09:23.843622 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-04 01:09:23.843628 | orchestrator | Wednesday 04 February 2026 01:06:03 +0000 (0:00:00.276) 0:00:00.276 **** 2026-02-04 01:09:23.843634 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:09:23.843640 | orchestrator | ok: [testbed-node-1] 2026-02-04 01:09:23.843645 | orchestrator | ok: [testbed-node-2] 2026-02-04 01:09:23.843650 | orchestrator | 2026-02-04 01:09:23.843655 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-04 01:09:23.843660 | orchestrator | Wednesday 04 February 2026 01:06:04 +0000 (0:00:00.464) 0:00:00.740 **** 2026-02-04 01:09:23.843665 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2026-02-04 01:09:23.843671 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2026-02-04 01:09:23.843677 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2026-02-04 01:09:23.843682 | orchestrator | 2026-02-04 01:09:23.843687 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2026-02-04 01:09:23.843693 | orchestrator | 2026-02-04 01:09:23.843698 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-02-04 01:09:23.843703 | orchestrator | Wednesday 04 February 2026 01:06:05 +0000 (0:00:01.279) 0:00:02.020 **** 2026-02-04 01:09:23.843708 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 01:09:23.843715 | orchestrator | 2026-02-04 01:09:23.843721 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2026-02-04 01:09:23.843726 | orchestrator | Wednesday 04 February 2026 01:06:06 +0000 (0:00:01.455) 0:00:03.476 **** 
2026-02-04 01:09:23.843749 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2026-02-04 01:09:23.843755 | orchestrator | 2026-02-04 01:09:23.843760 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2026-02-04 01:09:23.843765 | orchestrator | Wednesday 04 February 2026 01:06:11 +0000 (0:00:04.117) 0:00:07.593 **** 2026-02-04 01:09:23.843770 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2026-02-04 01:09:23.843775 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2026-02-04 01:09:23.843780 | orchestrator | 2026-02-04 01:09:23.843793 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2026-02-04 01:09:23.843797 | orchestrator | Wednesday 04 February 2026 01:06:18 +0000 (0:00:07.397) 0:00:14.990 **** 2026-02-04 01:09:23.843802 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-04 01:09:23.843807 | orchestrator | 2026-02-04 01:09:23.843812 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2026-02-04 01:09:23.843817 | orchestrator | Wednesday 04 February 2026 01:06:22 +0000 (0:00:03.755) 0:00:18.746 **** 2026-02-04 01:09:23.843823 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2026-02-04 01:09:23.843828 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-04 01:09:23.843833 | orchestrator | 2026-02-04 01:09:23.843838 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2026-02-04 01:09:23.843843 | orchestrator | Wednesday 04 February 2026 01:06:26 +0000 (0:00:04.325) 0:00:23.071 **** 2026-02-04 01:09:23.843848 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-04 01:09:23.843853 | orchestrator | 2026-02-04 01:09:23.843857 | 
orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2026-02-04 01:09:23.843862 | orchestrator | Wednesday 04 February 2026 01:06:30 +0000 (0:00:03.439) 0:00:26.510 **** 2026-02-04 01:09:23.843867 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2026-02-04 01:09:23.843871 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2026-02-04 01:09:23.843876 | orchestrator | 2026-02-04 01:09:23.843881 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2026-02-04 01:09:23.843886 | orchestrator | Wednesday 04 February 2026 01:06:37 +0000 (0:00:07.788) 0:00:34.298 **** 2026-02-04 01:09:23.843894 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-04 01:09:23.843912 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-04 01:09:23.843923 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-04 01:09:23.843932 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-04 01:09:23.843938 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-04 01:09:23.843944 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-04 01:09:23.843951 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-04 01:09:23.843962 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-04 01:09:23.843974 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-04 01:09:23.843983 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-04 01:09:23.843988 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-04 01:09:23.843993 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-04 01:09:23.844010 | orchestrator | 2026-02-04 01:09:23.844016 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-02-04 
01:09:23.844022 | orchestrator | Wednesday 04 February 2026 01:06:40 +0000 (0:00:02.400) 0:00:36.699 **** 2026-02-04 01:09:23.844075 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:09:23.844083 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:09:23.844088 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:09:23.844093 | orchestrator | 2026-02-04 01:09:23.844099 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-02-04 01:09:23.844103 | orchestrator | Wednesday 04 February 2026 01:06:40 +0000 (0:00:00.681) 0:00:37.381 **** 2026-02-04 01:09:23.844109 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 01:09:23.844118 | orchestrator | 2026-02-04 01:09:23.844124 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2026-02-04 01:09:23.844136 | orchestrator | Wednesday 04 February 2026 01:06:43 +0000 (0:00:02.673) 0:00:40.054 **** 2026-02-04 01:09:23.844147 | orchestrator | changed: [testbed-node-0] => (item=cinder-volume) 2026-02-04 01:09:23.844153 | orchestrator | changed: [testbed-node-1] => (item=cinder-volume) 2026-02-04 01:09:23.844158 | orchestrator | changed: [testbed-node-2] => (item=cinder-volume) 2026-02-04 01:09:23.844164 | orchestrator | changed: [testbed-node-0] => (item=cinder-backup) 2026-02-04 01:09:23.844170 | orchestrator | changed: [testbed-node-2] => (item=cinder-backup) 2026-02-04 01:09:23.844175 | orchestrator | changed: [testbed-node-1] => (item=cinder-backup) 2026-02-04 01:09:23.844181 | orchestrator | 2026-02-04 01:09:23.844186 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2026-02-04 01:09:23.844192 | orchestrator | Wednesday 04 February 2026 01:06:47 +0000 (0:00:03.947) 0:00:44.002 **** 2026-02-04 01:09:23.844198 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 
'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-02-04 01:09:23.844209 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-02-04 01:09:23.844216 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-02-04 01:09:23.844222 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-02-04 01:09:23.844236 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-02-04 01:09:23.844242 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-02-04 01:09:23.844251 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-02-04 01:09:23.844257 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 
'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-02-04 01:09:23.844262 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-02-04 01:09:23.844275 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-02-04 01:09:23.844286 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-02-04 01:09:23.844292 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-02-04 01:09:23.844298 | orchestrator | 2026-02-04 01:09:23.844306 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2026-02-04 01:09:23.844312 | orchestrator | Wednesday 04 February 2026 01:06:52 +0000 (0:00:04.989) 0:00:48.991 **** 2026-02-04 01:09:23.844318 | orchestrator | changed: 
[testbed-node-0] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-02-04 01:09:23.844323 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-02-04 01:09:23.844328 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-02-04 01:09:23.844334 | orchestrator | 2026-02-04 01:09:23.844340 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2026-02-04 01:09:23.844345 | orchestrator | Wednesday 04 February 2026 01:06:55 +0000 (0:00:03.059) 0:00:52.051 **** 2026-02-04 01:09:23.844350 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder.keyring) 2026-02-04 01:09:23.844355 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder.keyring) 2026-02-04 01:09:23.844361 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder.keyring) 2026-02-04 01:09:23.844369 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder-backup.keyring) 2026-02-04 01:09:23.844378 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder-backup.keyring) 2026-02-04 01:09:23.844387 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder-backup.keyring) 2026-02-04 01:09:23.844408 | orchestrator | 2026-02-04 01:09:23.844417 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2026-02-04 01:09:23.844427 | orchestrator | Wednesday 04 February 2026 01:06:59 +0000 (0:00:03.727) 0:00:55.778 **** 2026-02-04 01:09:23.844436 | orchestrator | ok: [testbed-node-0] => (item=cinder-volume) 2026-02-04 01:09:23.844446 | orchestrator | ok: [testbed-node-1] => (item=cinder-volume) 2026-02-04 01:09:23.844456 | orchestrator | ok: [testbed-node-0] => (item=cinder-backup) 2026-02-04 01:09:23.844463 | orchestrator | ok: [testbed-node-2] => (item=cinder-volume) 2026-02-04 01:09:23.844473 | orchestrator | ok: 
[testbed-node-1] => (item=cinder-backup) 2026-02-04 01:09:23.844482 | orchestrator | ok: [testbed-node-2] => (item=cinder-backup) 2026-02-04 01:09:23.844491 | orchestrator | 2026-02-04 01:09:23.844500 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2026-02-04 01:09:23.844505 | orchestrator | Wednesday 04 February 2026 01:07:00 +0000 (0:00:01.548) 0:00:57.326 **** 2026-02-04 01:09:23.844510 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:09:23.844516 | orchestrator | 2026-02-04 01:09:23.844521 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2026-02-04 01:09:23.844526 | orchestrator | Wednesday 04 February 2026 01:07:00 +0000 (0:00:00.148) 0:00:57.475 **** 2026-02-04 01:09:23.844531 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:09:23.844537 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:09:23.844542 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:09:23.844547 | orchestrator | 2026-02-04 01:09:23.844552 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-02-04 01:09:23.844557 | orchestrator | Wednesday 04 February 2026 01:07:01 +0000 (0:00:00.367) 0:00:57.842 **** 2026-02-04 01:09:23.844563 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 01:09:23.844568 | orchestrator | 2026-02-04 01:09:23.844573 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2026-02-04 01:09:23.844582 | orchestrator | Wednesday 04 February 2026 01:07:02 +0000 (0:00:00.867) 0:00:58.710 **** 2026-02-04 01:09:23.844588 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-04 01:09:23.844597 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-04 01:09:23.844603 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-04 01:09:23.844627 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-04 01:09:23.844633 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-04 01:09:23.844643 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-04 01:09:23.844649 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-04 01:09:23.844658 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-04 01:09:23.844668 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-04 01:09:23.844674 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-04 01:09:23.844680 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-04 01:09:23.844689 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-04 01:09:23.844695 | orchestrator | 2026-02-04 01:09:23.844701 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2026-02-04 01:09:23.844706 | orchestrator | Wednesday 04 February 2026 01:07:07 +0000 (0:00:05.240) 0:01:03.950 **** 2026-02-04 01:09:23.844712 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 
'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-04 01:09:23.844723 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-04 01:09:23.844729 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-04 01:09:23.844735 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-04 01:09:23.844740 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:09:23.844750 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-04 01:09:23.844756 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-04 01:09:23.844762 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-04 01:09:23.844773 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-04 01:09:23.844779 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:09:23.844785 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-04 01:09:23.844791 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-04 01:09:23.844799 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-04 01:09:23.844806 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-04 01:09:23.844816 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:09:23.844822 | orchestrator | 2026-02-04 01:09:23.844827 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2026-02-04 01:09:23.844833 | orchestrator | Wednesday 04 February 2026 01:07:08 +0000 (0:00:01.088) 0:01:05.039 **** 2026-02-04 01:09:23.844842 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-04 01:09:23.844848 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-04 01:09:23.844854 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-04 01:09:23.844863 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-04 01:09:23.844869 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:09:23.844874 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-04 01:09:23.844885 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-04 01:09:23.844892 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-04 01:09:23.844898 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-04 01:09:23.844904 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:09:23.844910 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-04 01:09:23.844918 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 
'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-04 01:09:23.844925 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-04 01:09:23.844936 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-04 01:09:23.844942 | 
orchestrator | skipping: [testbed-node-2] 2026-02-04 01:09:23.844947 | orchestrator | 2026-02-04 01:09:23.844953 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2026-02-04 01:09:23.844959 | orchestrator | Wednesday 04 February 2026 01:07:11 +0000 (0:00:02.470) 0:01:07.510 **** 2026-02-04 01:09:23.844965 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-04 01:09:23.844971 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 
'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-04 01:09:23.844980 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-04 01:09:23.844989 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-04 01:09:23.844997 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-04 01:09:23.845003 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-04 01:09:23.845009 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-04 01:09:23.845014 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 
'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-04 01:09:23.845023 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-04 01:09:23.845054 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-04 01:09:23.845064 | orchestrator 
| changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-04 01:09:23.845069 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-04 01:09:23.845075 | orchestrator | 2026-02-04 01:09:23.845080 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2026-02-04 01:09:23.845086 | orchestrator | Wednesday 04 February 2026 01:07:17 +0000 (0:00:06.436) 0:01:13.947 **** 2026-02-04 01:09:23.845091 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-02-04 01:09:23.845097 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-02-04 01:09:23.845102 | orchestrator | changed: 
[testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-02-04 01:09:23.845107 | orchestrator | 2026-02-04 01:09:23.845112 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2026-02-04 01:09:23.845118 | orchestrator | Wednesday 04 February 2026 01:07:19 +0000 (0:00:02.408) 0:01:16.355 **** 2026-02-04 01:09:23.845126 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-04 01:09:23.845135 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 
'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-04 01:09:23.845142 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-04 01:09:23.845148 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-04 01:09:23.845153 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-04 01:09:23.845159 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-04 01:09:23.845167 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-04 01:09:23.845175 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-04 01:09:23.845179 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-04 01:09:23.845187 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 
5672'], 'timeout': '30'}}}) 2026-02-04 01:09:23.845193 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-04 01:09:23.845199 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-04 01:09:23.845209 | orchestrator | 2026-02-04 01:09:23.845214 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2026-02-04 01:09:23.845220 | orchestrator | Wednesday 04 February 2026 01:07:41 +0000 (0:00:21.427) 0:01:37.782 **** 2026-02-04 01:09:23.845226 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:09:23.845231 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:09:23.845236 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:09:23.845242 | orchestrator 
| 2026-02-04 01:09:23.845247 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2026-02-04 01:09:23.845255 | orchestrator | Wednesday 04 February 2026 01:07:43 +0000 (0:00:02.658) 0:01:40.440 **** 2026-02-04 01:09:23.845261 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-04 01:09:23.845269 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-04 01:09:23.845274 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-04 01:09:23.845280 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-04 01:09:23.845286 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-04 01:09:23.845299 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-04 01:09:23.845305 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-04 01:09:23.845310 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-04 01:09:23.845318 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:09:23.845324 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:09:23.845328 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-04 01:09:23.845334 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-04 
01:09:23.845342 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-04 01:09:23.845351 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-04 01:09:23.845357 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:09:23.845362 | orchestrator | 2026-02-04 01:09:23.845368 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2026-02-04 01:09:23.845373 | orchestrator | Wednesday 04 February 2026 01:07:45 +0000 (0:00:01.076) 0:01:41.517 **** 2026-02-04 01:09:23.845379 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:09:23.845384 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:09:23.845390 | 
orchestrator | skipping: [testbed-node-2] 2026-02-04 01:09:23.845395 | orchestrator | 2026-02-04 01:09:23.845401 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2026-02-04 01:09:23.845406 | orchestrator | Wednesday 04 February 2026 01:07:45 +0000 (0:00:00.419) 0:01:41.936 **** 2026-02-04 01:09:23.845414 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-04 01:09:23.845420 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 
'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-04 01:09:23.845430 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-04 01:09:23.845439 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-04 01:09:23.845445 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-04 01:09:23.845451 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-04 01:09:23.845459 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-04 01:09:23.845464 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 
'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-04 01:09:23.845473 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-04 01:09:23.845482 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-04 01:09:23.845488 | orchestrator 
| changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-04 01:09:23.845493 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-04 01:09:23.845499 | orchestrator | 2026-02-04 01:09:23.845504 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-02-04 01:09:23.845510 | orchestrator | Wednesday 04 February 2026 01:07:49 +0000 (0:00:04.302) 0:01:46.239 **** 2026-02-04 01:09:23.845515 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:09:23.845525 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:09:23.845531 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:09:23.845536 | orchestrator | 2026-02-04 01:09:23.845542 | orchestrator | TASK [cinder : 
Creating Cinder database] *************************************** 2026-02-04 01:09:23.845548 | orchestrator | Wednesday 04 February 2026 01:07:50 +0000 (0:00:00.770) 0:01:47.009 **** 2026-02-04 01:09:23.845553 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:09:23.845558 | orchestrator | 2026-02-04 01:09:23.845567 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2026-02-04 01:09:23.845571 | orchestrator | Wednesday 04 February 2026 01:07:52 +0000 (0:00:02.248) 0:01:49.258 **** 2026-02-04 01:09:23.845577 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:09:23.845581 | orchestrator | 2026-02-04 01:09:23.845586 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2026-02-04 01:09:23.845592 | orchestrator | Wednesday 04 February 2026 01:07:55 +0000 (0:00:02.890) 0:01:52.149 **** 2026-02-04 01:09:23.845598 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:09:23.845603 | orchestrator | 2026-02-04 01:09:23.845608 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-02-04 01:09:23.845614 | orchestrator | Wednesday 04 February 2026 01:08:15 +0000 (0:00:19.988) 0:02:12.137 **** 2026-02-04 01:09:23.845619 | orchestrator | 2026-02-04 01:09:23.845624 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-02-04 01:09:23.845629 | orchestrator | Wednesday 04 February 2026 01:08:15 +0000 (0:00:00.070) 0:02:12.208 **** 2026-02-04 01:09:23.845634 | orchestrator | 2026-02-04 01:09:23.845639 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-02-04 01:09:23.845645 | orchestrator | Wednesday 04 February 2026 01:08:15 +0000 (0:00:00.066) 0:02:12.275 **** 2026-02-04 01:09:23.845650 | orchestrator | 2026-02-04 01:09:23.845655 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 
2026-02-04 01:09:23.845661 | orchestrator | Wednesday 04 February 2026 01:08:15 +0000 (0:00:00.087) 0:02:12.363 **** 2026-02-04 01:09:23.845666 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:09:23.845671 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:09:23.845676 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:09:23.845681 | orchestrator | 2026-02-04 01:09:23.845687 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2026-02-04 01:09:23.845692 | orchestrator | Wednesday 04 February 2026 01:08:34 +0000 (0:00:19.113) 0:02:31.476 **** 2026-02-04 01:09:23.845697 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:09:23.845702 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:09:23.845707 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:09:23.845712 | orchestrator | 2026-02-04 01:09:23.845718 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2026-02-04 01:09:23.845723 | orchestrator | Wednesday 04 February 2026 01:08:45 +0000 (0:00:10.372) 0:02:41.848 **** 2026-02-04 01:09:23.845728 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:09:23.845733 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:09:23.845737 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:09:23.845744 | orchestrator | 2026-02-04 01:09:23.845751 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2026-02-04 01:09:23.845757 | orchestrator | Wednesday 04 February 2026 01:09:08 +0000 (0:00:23.223) 0:03:05.072 **** 2026-02-04 01:09:23.845761 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:09:23.845766 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:09:23.845770 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:09:23.845775 | orchestrator | 2026-02-04 01:09:23.845780 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2026-02-04 
01:09:23.845790 | orchestrator | Wednesday 04 February 2026 01:09:21 +0000 (0:00:12.595) 0:03:17.667 **** 2026-02-04 01:09:23.845796 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:09:23.845801 | orchestrator | 2026-02-04 01:09:23.845806 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 01:09:23.845812 | orchestrator | testbed-node-0 : ok=30  changed=22  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-02-04 01:09:23.845817 | orchestrator | testbed-node-1 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-04 01:09:23.845822 | orchestrator | testbed-node-2 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-04 01:09:23.845831 | orchestrator | 2026-02-04 01:09:23.845837 | orchestrator | 2026-02-04 01:09:23.845842 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-04 01:09:23.845847 | orchestrator | Wednesday 04 February 2026 01:09:21 +0000 (0:00:00.302) 0:03:17.970 **** 2026-02-04 01:09:23.845852 | orchestrator | =============================================================================== 2026-02-04 01:09:23.845858 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 23.22s 2026-02-04 01:09:23.845863 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 21.43s 2026-02-04 01:09:23.845868 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 19.99s 2026-02-04 01:09:23.845873 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 19.11s 2026-02-04 01:09:23.845879 | orchestrator | cinder : Restart cinder-backup container ------------------------------- 12.60s 2026-02-04 01:09:23.845884 | orchestrator | cinder : Restart cinder-scheduler container ---------------------------- 10.38s 2026-02-04 01:09:23.845889 | orchestrator | 
service-ks-register : cinder | Granting user roles ---------------------- 7.79s
2026-02-04 01:09:23.845894 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 7.39s
2026-02-04 01:09:23.845899 | orchestrator | cinder : Copying over config.json files for services -------------------- 6.44s
2026-02-04 01:09:23.845903 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 5.24s
2026-02-04 01:09:23.845911 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 4.99s
2026-02-04 01:09:23.845916 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 4.33s
2026-02-04 01:09:23.845920 | orchestrator | cinder : Check cinder containers ---------------------------------------- 4.30s
2026-02-04 01:09:23.845925 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 4.12s
2026-02-04 01:09:23.845930 | orchestrator | cinder : Ensuring cinder service ceph config subdirs exists ------------- 3.95s
2026-02-04 01:09:23.845935 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.76s
2026-02-04 01:09:23.845940 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 3.73s
2026-02-04 01:09:23.845945 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.44s
2026-02-04 01:09:23.845950 | orchestrator | cinder : Copy over Ceph keyring files for cinder-volume ----------------- 3.06s
2026-02-04 01:09:23.845955 | orchestrator | cinder : Creating Cinder database user and setting permissions ---------- 2.89s
2026-02-04 01:09:23.845961 | orchestrator | 2026-02-04 01:09:23 | INFO  | Task e1a0af0c-c5dc-4576-858b-027414f2952c is in state STARTED
2026-02-04 01:09:23.845965 | orchestrator | 2026-02-04 01:09:23 | INFO  | Task 6de41111-19cd-4ecd-8571-65308ec3f489 is in state STARTED
2026-02-04 01:09:23.848343 | orchestrator | 2026-02-04 01:09:23 | INFO  | Task 6598ee3a-f66e-4301-b76f-72a4be67b4a0 is in state STARTED
2026-02-04 01:09:23.849444 | orchestrator | 2026-02-04 01:09:23 | INFO  | Task 0e457882-4fce-4b8e-a9d0-da52881ff114 is in state STARTED
2026-02-04 01:09:23.849471 | orchestrator | 2026-02-04 01:09:23 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:09:26.883688 | orchestrator | 2026-02-04 01:09:26 | INFO  | Task e1a0af0c-c5dc-4576-858b-027414f2952c is in state STARTED
2026-02-04 01:09:26.885447 | orchestrator | 2026-02-04 01:09:26 | INFO  | Task 6de41111-19cd-4ecd-8571-65308ec3f489 is in state STARTED
2026-02-04 01:09:26.885854 | orchestrator | 2026-02-04 01:09:26 | INFO  | Task 6598ee3a-f66e-4301-b76f-72a4be67b4a0 is in state STARTED
2026-02-04 01:09:26.887337 | orchestrator | 2026-02-04 01:09:26 | INFO  | Task 0e457882-4fce-4b8e-a9d0-da52881ff114 is in state STARTED
2026-02-04 01:09:26.887371 | orchestrator | 2026-02-04 01:09:26 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:09:29.928248 | orchestrator | 2026-02-04 01:09:29 | INFO  | Task e1a0af0c-c5dc-4576-858b-027414f2952c is in state STARTED
2026-02-04 01:09:29.930479 | orchestrator | 2026-02-04 01:09:29 | INFO  | Task 6de41111-19cd-4ecd-8571-65308ec3f489 is in state STARTED
2026-02-04 01:09:29.933499 | orchestrator | 2026-02-04 01:09:29 | INFO  | Task 6598ee3a-f66e-4301-b76f-72a4be67b4a0 is in state STARTED
2026-02-04 01:09:29.935240 | orchestrator | 2026-02-04 01:09:29 | INFO  | Task 0e457882-4fce-4b8e-a9d0-da52881ff114 is in state STARTED
2026-02-04 01:09:29.935303 | orchestrator | 2026-02-04 01:09:29 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:09:32.978645 | orchestrator | 2026-02-04 01:09:32 | INFO  | Task e1a0af0c-c5dc-4576-858b-027414f2952c is in state STARTED
2026-02-04 01:09:32.979985 | orchestrator | 2026-02-04 01:09:32 | INFO  | Task 6de41111-19cd-4ecd-8571-65308ec3f489 is in state STARTED
2026-02-04 01:09:32.982307 | orchestrator | 2026-02-04
01:09:32 | INFO  | Task 6598ee3a-f66e-4301-b76f-72a4be67b4a0 is in state STARTED
2026-02-04 01:09:32.983946 | orchestrator | 2026-02-04 01:09:32 | INFO  | Task 0e457882-4fce-4b8e-a9d0-da52881ff114 is in state STARTED
2026-02-04 01:09:32.983996 | orchestrator | 2026-02-04 01:09:32 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:09:36.040224 | orchestrator | 2026-02-04 01:09:36 | INFO  | Task e1a0af0c-c5dc-4576-858b-027414f2952c is in state STARTED
2026-02-04 01:09:36.040461 | orchestrator | 2026-02-04 01:09:36 | INFO  | Task a36b03f7-eb98-4e8a-8cb0-ab2bc3675530 is in state STARTED
2026-02-04 01:09:36.044288 | orchestrator | 2026-02-04 01:09:36 | INFO  | Task 6de41111-19cd-4ecd-8571-65308ec3f489 is in state STARTED
2026-02-04 01:09:36.046789 | orchestrator | 2026-02-04 01:09:36 | INFO  | Task 6598ee3a-f66e-4301-b76f-72a4be67b4a0 is in state STARTED
2026-02-04 01:09:36.054688 | orchestrator | 2026-02-04 01:09:36 | INFO  | Task 0e457882-4fce-4b8e-a9d0-da52881ff114 is in state SUCCESS
2026-02-04 01:09:36.060209 | orchestrator |
2026-02-04 01:09:36.060266 | orchestrator |
2026-02-04 01:09:36.060275 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-04 01:09:36.060282 | orchestrator |
2026-02-04 01:09:36.060289 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-04 01:09:36.060295 | orchestrator | Wednesday 04 February 2026 01:05:34 +0000 (0:00:00.316) 0:00:00.316 ****
2026-02-04 01:09:36.060302 | orchestrator | ok: [testbed-manager]
2026-02-04 01:09:36.060310 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:09:36.060317 | orchestrator | ok: [testbed-node-1]
2026-02-04 01:09:36.060323 | orchestrator | ok: [testbed-node-2]
2026-02-04 01:09:36.060330 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:09:36.060337 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:09:36.060343 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:09:36.060349 | orchestrator |
2026-02-04 01:09:36.060354 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-04 01:09:36.060360 | orchestrator | Wednesday 04 February 2026 01:05:35 +0000 (0:00:01.103) 0:00:01.420 ****
2026-02-04 01:09:36.060437 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True)
2026-02-04 01:09:36.060453 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True)
2026-02-04 01:09:36.060460 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True)
2026-02-04 01:09:36.060466 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True)
2026-02-04 01:09:36.060472 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True)
2026-02-04 01:09:36.060478 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True)
2026-02-04 01:09:36.060485 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True)
2026-02-04 01:09:36.060491 | orchestrator |
2026-02-04 01:09:36.060498 | orchestrator | PLAY [Apply role prometheus] ***************************************************
2026-02-04 01:09:36.060505 | orchestrator |
2026-02-04 01:09:36.060524 | orchestrator | TASK [prometheus : include_tasks] **********************************************
2026-02-04 01:09:36.060531 | orchestrator | Wednesday 04 February 2026 01:05:36 +0000 (0:00:00.851) 0:00:02.271 ****
2026-02-04 01:09:36.060538 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-04 01:09:36.060545 | orchestrator |
2026-02-04 01:09:36.060551 | orchestrator | TASK [prometheus : Ensuring config directories exist] **************************
2026-02-04 01:09:36.060557 | orchestrator | Wednesday 04 February 2026 01:05:38 +0000 (0:00:01.814) 0:00:04.086 ****
2026-02-04 01:09:36.060565 | orchestrator | changed: [testbed-node-0] => (item={'key':
'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-04 01:09:36.060575 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-02-04 01:09:36.060582 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-04 01:09:36.060589 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 
'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 01:09:36.060608 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 01:09:36.060619 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 01:09:36.060630 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-04 01:09:36.060637 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-04 01:09:36.060643 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-04 01:09:36.060650 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 01:09:36.060705 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-04 01:09:36.060712 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-04 01:09:36.060738 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 01:09:36.060755 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 01:09:36.060766 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-04 01:09:36.060773 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-04 01:09:36.060804 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 01:09:36.060811 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-04 01:09:36.060818 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-04 01:09:36.060830 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-02-04 01:09:36.060841 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-04 01:09:36.060852 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-04 01:09:36.060858 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 01:09:36.060865 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-04 01:09:36.060872 | orchestrator | changed: [testbed-node-5] => (item={'key': 
'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-04 01:09:36.060879 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 01:09:36.060886 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 01:09:36.060899 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-04 01:09:36.060910 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-04 01:09:36.060917 | orchestrator | 2026-02-04 01:09:36.060923 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-02-04 01:09:36.060930 | orchestrator | Wednesday 04 February 2026 01:05:42 +0000 (0:00:03.707) 0:00:07.793 **** 2026-02-04 01:09:36.060937 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-04 01:09:36.060943 | orchestrator | 2026-02-04 01:09:36.060949 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2026-02-04 01:09:36.060956 | orchestrator | Wednesday 04 February 2026 01:05:44 +0000 (0:00:02.004) 0:00:09.798 **** 2026-02-04 01:09:36.060963 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-02-04 01:09:36.060969 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-04 01:09:36.060976 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-04 01:09:36.060983 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-04 01:09:36.060993 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': 
{'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-04 01:09:36.061006 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-04 01:09:36.061013 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-04 01:09:36.061020 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-04 01:09:36.061026 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-04 01:09:36.061054 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-04 01:09:36.061061 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 01:09:36.061068 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 01:09:36.061082 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 01:09:36.061091 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-04 01:09:36.061098 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-04 
01:09:36.061104 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-04 01:09:36.061110 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-04 01:09:36.061117 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 01:09:36.061123 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 01:09:36.061130 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 01:09:36.061146 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-04 01:09:36.061155 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-02-04 01:09:36.061163 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-04 01:09:36.061177 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-04 01:09:36.061183 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-04 01:09:36.061191 | orchestrator | changed: 
[testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 01:09:36.061200 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 01:09:36.061433 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 01:09:36.061449 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 01:09:36.061455 | orchestrator | 2026-02-04 01:09:36.061461 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2026-02-04 01:09:36.061469 | orchestrator | Wednesday 04 February 2026 01:05:51 +0000 (0:00:07.549) 0:00:17.347 **** 2026-02-04 01:09:36.061475 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-04 01:09:36.061481 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 01:09:36.061487 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': 
{'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-02-04 01:09:36.061493 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 01:09:36.061504 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-04 01:09:36.061515 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-04 
01:09:36.061524 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 01:09:36.061530 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-04 01:09:36.061537 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-02-04 01:09:36.061544 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 01:09:36.061550 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-04 01:09:36.061560 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 01:09:36.061569 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 01:09:36.061578 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-04 01:09:36.061584 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 01:09:36.061591 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:09:36.061597 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-04 01:09:36.061604 | orchestrator | skipping: [testbed-manager] 2026-02-04 01:09:36.061611 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 01:09:36.061618 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:09:36.061624 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 01:09:36.061634 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-04 01:09:36.061640 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 
'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 01:09:36.061646 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:09:36.061656 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-04 01:09:36.061665 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-04 01:09:36.061672 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-04 01:09:36.061677 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:09:36.061683 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-04 01:09:36.061690 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-04 01:09:36.061701 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-04 01:09:36.061707 | orchestrator | skipping: 
[testbed-node-4] 2026-02-04 01:09:36.061714 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-04 01:09:36.061720 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-04 01:09:36.061735 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-04 01:09:36.061742 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:09:36.061748 | orchestrator | 2026-02-04 01:09:36.061754 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2026-02-04 01:09:36.061760 | orchestrator | Wednesday 04 February 2026 01:05:53 
+0000 (0:00:02.188) 0:00:19.536 **** 2026-02-04 01:09:36.061767 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-04 01:09:36.061773 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 01:09:36.061779 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 01:09:36.061790 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-04 01:09:36.061798 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 01:09:36.061804 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-02-04 01:09:36.061819 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-04 01:09:36.061826 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-04 01:09:36.061833 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-02-04 01:09:36.061844 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 
'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 01:09:36.061850 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-04 01:09:36.061857 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 01:09:36.061863 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 01:09:36.061874 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-04 01:09:36.061885 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 01:09:36.061892 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:09:36.061898 | orchestrator | skipping: [testbed-manager] 2026-02-04 01:09:36.061905 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:09:36.061912 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-04 01:09:36.061922 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': 
{'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 01:09:36.061930 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 01:09:36.061936 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-04 01:09:36.061943 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 01:09:36.061950 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:09:36.061960 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-04 01:09:36.061967 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-04 01:09:36.061971 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-04 01:09:36.061975 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:09:36.061982 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-04 01:09:36.061986 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-04 01:09:36.061990 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-04 01:09:36.061994 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:09:36.061998 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-04 01:09:36.062002 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-04 01:09:36.062392 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-04 01:09:36.062418 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:09:36.062425 | orchestrator | 2026-02-04 01:09:36.062432 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2026-02-04 01:09:36.062439 | orchestrator | Wednesday 04 February 2026 01:05:55 +0000 (0:00:02.159) 0:00:21.695 **** 2026-02-04 01:09:36.062452 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 
'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-04 01:09:36.062468 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-04 01:09:36.062475 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-02-04 01:09:36.062483 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-04 01:09:36.062504 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-04 01:09:36.062511 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-04 01:09:36.062524 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 01:09:36.062534 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-04 01:09:36.062546 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 01:09:36.062553 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-04 01:09:36.062560 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-04 01:09:36.062567 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 01:09:36.062574 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-04 01:09:36.062581 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 01:09:36.062592 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-04 01:09:36.062603 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 01:09:36.062614 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-04 01:09:36.062621 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-04 01:09:36.062627 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 
'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 01:09:36.062633 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-04 01:09:36.062640 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-04 01:09:36.062647 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 
'dimensions': {}}}) 2026-02-04 01:09:36.062657 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-04 01:09:36.062671 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-04 01:09:36.062677 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-02-04 01:09:36.062685 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 01:09:36.062691 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 01:09:36.062697 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 01:09:36.062704 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': 
{'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 01:09:36.062710 | orchestrator | 2026-02-04 01:09:36.062716 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2026-02-04 01:09:36.062722 | orchestrator | Wednesday 04 February 2026 01:06:02 +0000 (0:00:06.441) 0:00:28.137 **** 2026-02-04 01:09:36.062728 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-04 01:09:36.062738 | orchestrator | 2026-02-04 01:09:36.062744 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2026-02-04 01:09:36.062753 | orchestrator | Wednesday 04 February 2026 01:06:03 +0000 (0:00:01.530) 0:00:29.668 **** 2026-02-04 01:09:36.062810 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1321591, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9698274, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 01:09:36.062822 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 
1321591, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9698274, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 01:09:36.062830 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1321591, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9698274, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 01:09:36.062837 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1321591, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9698274, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 01:09:36.062843 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1321591, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9698274, 'gr_name': 'root', 
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
[loop output condensed: every item below is a regular file owned by root:root (uid 0, gid 0), mode '0644', dev 120, nlink 1, atime/mtime 1770163339.0; per-file size/inode/ctime: fluentd-aggregator.rules 996/1321591/1770164327.9698274, prometheus.rules 12980/1321629/1770164327.97594, ceph.rules 55956/1321578/1770164327.9669776, openstack.rules 12293/1321616/1770164327.973865, cadvisor.rules 3900/1321570/1770164327.9641595, haproxy.rules 7933/1321595/1770164327.9705324, node.rules 13522/1321609/1770164327.9737298, hardware.rules 5593/1321600/1770164327.971142, elasticsearch.rules 5987/1321585/1770164327.9684632, prometheus.rec.rules 3/1321627/1770164327.9752927, alertmanager.rec.rules 3/1321533/1770164327.9548733, redfish.rules 334/1321645/1770164327.9780984, prometheus-extra.rules 7408/1321622/1770164327.9752927]
2026-02-04 01:09:36.062870 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/fluentd-aggregator.rules)
2026-02-04 01:09:36.062883 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/prometheus.rules)
2026-02-04 01:09:36.062922 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/prometheus.rules)
2026-02-04 01:09:36.062931 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/fluentd-aggregator.rules)
2026-02-04 01:09:36.062937 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/prometheus.rules)
2026-02-04 01:09:36.062945 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/prometheus.rules)
2026-02-04 01:09:36.062952 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/prometheus.rules)
2026-02-04 01:09:36.062959 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/prometheus.rules)
2026-02-04 01:09:36.062975 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/ceph.rules)
2026-02-04 01:09:36.062985 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/ceph.rules)
2026-02-04 01:09:36.062992 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/ceph.rules)
2026-02-04 01:09:36.062999 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/ceph.rules)
2026-02-04 01:09:36.063006 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/ceph.rules)
2026-02-04 01:09:36.063013 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/openstack.rules)
2026-02-04 01:09:36.063145 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/openstack.rules)
2026-02-04 01:09:36.063172 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/openstack.rules)
2026-02-04 01:09:36.063183 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/ceph.rules)
2026-02-04 01:09:36.063191 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/prometheus.rules)
2026-02-04 01:09:36.063198 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/openstack.rules)
2026-02-04 01:09:36.063205 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/cadvisor.rules)
2026-02-04 01:09:36.063232 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/openstack.rules)
2026-02-04 01:09:36.063242 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/cadvisor.rules)
2026-02-04 01:09:36.063254 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/cadvisor.rules)
2026-02-04 01:09:36.063404 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/openstack.rules)
2026-02-04 01:09:36.063418 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/cadvisor.rules)
2026-02-04 01:09:36.063424 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/haproxy.rules)
2026-02-04 01:09:36.063430 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/haproxy.rules)
2026-02-04 01:09:36.063436 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/cadvisor.rules)
2026-02-04 01:09:36.063442 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/haproxy.rules)
2026-02-04 01:09:36.063454 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/haproxy.rules)
2026-02-04 01:09:36.063467 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/cadvisor.rules)
2026-02-04 01:09:36.063474 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/node.rules)
2026-02-04 01:09:36.063480 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/haproxy.rules)
2026-02-04 01:09:36.063486 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/node.rules)
2026-02-04 01:09:36.063492 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/node.rules)
2026-02-04 01:09:36.063502 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/hardware.rules)
2026-02-04 01:09:36.063509 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/node.rules)
2026-02-04 01:09:36.063523 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/ceph.rules)
2026-02-04 01:09:36.063529 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/hardware.rules)
2026-02-04 01:09:36.063535 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/elasticsearch.rules)
2026-02-04 01:09:36.063542 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/hardware.rules)
2026-02-04 01:09:36.063548 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/haproxy.rules)
2026-02-04 01:09:36.063558 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/node.rules)
2026-02-04 01:09:36.063564 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/hardware.rules)
2026-02-04 01:09:36.063587 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/hardware.rules)
2026-02-04 01:09:36.063600 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/prometheus.rec.rules)
2026-02-04 01:09:36.063606 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/elasticsearch.rules)
2026-02-04 01:09:36.063613 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/elasticsearch.rules)
2026-02-04 01:09:36.063619 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/elasticsearch.rules)
2026-02-04 01:09:36.063631 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/openstack.rules)
2026-02-04 01:09:36.063637 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/alertmanager.rec.rules)
2026-02-04 01:09:36.063647 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/prometheus.rec.rules)
2026-02-04 01:09:36.063656 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/node.rules)
2026-02-04 01:09:36.063662 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/prometheus.rec.rules)
2026-02-04 01:09:36.063668 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/elasticsearch.rules)
2026-02-04 01:09:36.063680 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/prometheus.rec.rules)
2026-02-04 01:09:36.063687 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/redfish.rules)
2026-02-04 01:09:36.063694 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/alertmanager.rec.rules)
2026-02-04 01:09:36.063704 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/alertmanager.rec.rules)
2026-02-04 01:09:36.063713 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/prometheus.rec.rules)
2026-02-04 01:09:36.063720 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/hardware.rules)
2026-02-04 01:09:36.063726 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/prometheus-extra.rules)
2026-02-04 01:09:36.063737 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/alertmanager.rec.rules)
2026-02-04 01:09:36.063743 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/redfish.rules)
2026-02-04 01:09:36.063749 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/alertmanager.rec.rules)
2026-02-04 01:09:36.063760 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/elasticsearch.rules)
2026-02-04 01:09:36.063770 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/redfish.rules)
2026-02-04 01:09:36.063777 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1321570, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9641595, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': 
False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-04 01:09:36.063783 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1321574, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9658098, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 01:09:36.063793 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1321645, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9780984, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 01:09:36.063799 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1321622, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9752927, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 01:09:36.063805 | orchestrator | 
skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1321627, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9752927, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 01:09:36.063815 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1321622, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9752927, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 01:09:36.063824 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1321534, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9550629, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 01:09:36.063831 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': 
False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1321645, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9780984, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 01:09:36.063838 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1321574, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9658098, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 01:09:36.063849 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1321574, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9658098, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 01:09:36.063856 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1321622, 'dev': 120, 'nlink': 1, 'atime': 
1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9752927, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 01:09:36.063863 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1321533, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9548733, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 01:09:36.064363 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1321604, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9724984, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 01:09:36.064392 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1321534, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9550629, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 
'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 01:09:36.064400 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1321645, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9780984, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 01:09:36.064414 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1321574, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9658098, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 01:09:36.064420 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1321604, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9724984, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 01:09:36.064426 | orchestrator | 
skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1321534, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9550629, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 01:09:36.064433 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1321622, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9752927, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 01:09:36.064445 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1321534, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9550629, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 01:09:36.064455 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': 
False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1321604, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9724984, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 01:09:36.064462 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1321602, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9715729, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 01:09:36.064473 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1321622, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9752927, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 01:09:36.064479 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1321595, 'dev': 120, 'nlink': 1, 'atime': 
1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9705324, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-04 01:09:36.064485 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1321602, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9715729, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 01:09:36.064491 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1321602, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9715729, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 01:09:36.064501 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1321604, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9724984, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 
'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 01:09:36.064510 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1321574, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9658098, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 01:09:36.064516 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1321642, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.977697, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 01:09:36.064527 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:09:36.064534 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1321534, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9550629, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 
'isgid': False})  2026-02-04 01:09:36.064541 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1321574, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9658098, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 01:09:36.064547 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1321534, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9550629, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 01:09:36.064554 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1321602, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9715729, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 01:09:36.064564 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': 
'0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1321642, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.977697, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 01:09:36.064571 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:09:36.064581 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1321604, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9724984, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 01:09:36.064592 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1321609, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9737298, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-04 01:09:36.064596 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1321604, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9724984, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 01:09:36.064600 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1321642, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.977697, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 01:09:36.064604 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:09:36.064608 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1321642, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.977697, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 01:09:36.064612 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:09:36.064616 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1321602, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9715729, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 01:09:36.064622 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1321602, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9715729, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 01:09:36.064630 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1321642, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.977697, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 01:09:36.064637 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:09:36.064641 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1321642, 'dev': 120, 'nlink': 1, 
'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.977697, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 01:09:36.064645 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:09:36.064649 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1321600, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.971142, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-04 01:09:36.064653 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1321585, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9684632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-04 01:09:36.064657 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1321627, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9752927, 'gr_name': 
'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-04 01:09:36.064661 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1321533, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9548733, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-04 01:09:36.064668 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1321645, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9780984, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-04 01:09:36.064677 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1321622, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9752927, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': 
False, 'isuid': False, 'isgid': False}) 2026-02-04 01:09:36.064681 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1321574, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9658098, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-04 01:09:36.064685 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1321534, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9550629, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-04 01:09:36.064689 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1321604, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9724984, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-04 01:09:36.064693 | orchestrator | changed: [testbed-manager] => (item={'path': 
'/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1321602, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9715729, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-04 01:09:36.064697 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1321642, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.977697, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-04 01:09:36.064702 | orchestrator | 2026-02-04 01:09:36.064706 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2026-02-04 01:09:36.064710 | orchestrator | Wednesday 04 February 2026 01:06:39 +0000 (0:00:35.119) 0:01:04.787 **** 2026-02-04 01:09:36.064714 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-04 01:09:36.064724 | orchestrator | 2026-02-04 01:09:36.064730 | orchestrator | TASK [prometheus : Find prometheus host config overrides] ********************** 2026-02-04 01:09:36.064734 | orchestrator | Wednesday 04 February 2026 01:06:40 +0000 (0:00:01.128) 0:01:05.916 **** 2026-02-04 01:09:36.064738 | orchestrator | [WARNING]: Skipped 2026-02-04 01:09:36.064743 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-04 01:09:36.064747 | 
orchestrator | node-0/prometheus.yml.d' path due to this access issue: 2026-02-04 01:09:36.064751 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-04 01:09:36.064757 | orchestrator | node-0/prometheus.yml.d' is not a directory 2026-02-04 01:09:36.064764 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-04 01:09:36.064770 | orchestrator | [WARNING]: Skipped 2026-02-04 01:09:36.064777 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-04 01:09:36.064783 | orchestrator | manager/prometheus.yml.d' path due to this access issue: 2026-02-04 01:09:36.064789 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-04 01:09:36.064795 | orchestrator | manager/prometheus.yml.d' is not a directory 2026-02-04 01:09:36.064802 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-04 01:09:36.064808 | orchestrator | [WARNING]: Skipped 2026-02-04 01:09:36.064870 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-04 01:09:36.064877 | orchestrator | node-1/prometheus.yml.d' path due to this access issue: 2026-02-04 01:09:36.064883 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-04 01:09:36.064889 | orchestrator | node-1/prometheus.yml.d' is not a directory 2026-02-04 01:09:36.064895 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-02-04 01:09:36.064901 | orchestrator | [WARNING]: Skipped 2026-02-04 01:09:36.064907 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-04 01:09:36.064913 | orchestrator | node-2/prometheus.yml.d' path due to this access issue: 2026-02-04 01:09:36.064919 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-04 01:09:36.064926 | orchestrator | node-2/prometheus.yml.d' is not a directory 2026-02-04 
01:09:36.064932 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-02-04 01:09:36.064937 | orchestrator | [WARNING]: Skipped 2026-02-04 01:09:36.064943 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-04 01:09:36.064949 | orchestrator | node-3/prometheus.yml.d' path due to this access issue: 2026-02-04 01:09:36.064955 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-04 01:09:36.064961 | orchestrator | node-3/prometheus.yml.d' is not a directory 2026-02-04 01:09:36.065141 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-02-04 01:09:36.065153 | orchestrator | [WARNING]: Skipped 2026-02-04 01:09:36.065160 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-04 01:09:36.065167 | orchestrator | node-4/prometheus.yml.d' path due to this access issue: 2026-02-04 01:09:36.065175 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-04 01:09:36.065183 | orchestrator | node-4/prometheus.yml.d' is not a directory 2026-02-04 01:09:36.065190 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-02-04 01:09:36.065198 | orchestrator | [WARNING]: Skipped 2026-02-04 01:09:36.065206 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-04 01:09:36.065214 | orchestrator | node-5/prometheus.yml.d' path due to this access issue: 2026-02-04 01:09:36.065223 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-04 01:09:36.065231 | orchestrator | node-5/prometheus.yml.d' is not a directory 2026-02-04 01:09:36.065239 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-02-04 01:09:36.065247 | orchestrator | 2026-02-04 01:09:36.065263 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************ 2026-02-04 01:09:36.065271 | orchestrator | Wednesday 04 February 
2026 01:06:45 +0000 (0:00:04.956) 0:01:10.873 **** 2026-02-04 01:09:36.065278 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-02-04 01:09:36.065286 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-02-04 01:09:36.065293 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-02-04 01:09:36.065299 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:09:36.065306 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:09:36.065312 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:09:36.065319 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-02-04 01:09:36.065327 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:09:36.065334 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-02-04 01:09:36.065340 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:09:36.065347 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-02-04 01:09:36.065353 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:09:36.065359 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2026-02-04 01:09:36.065366 | orchestrator | 2026-02-04 01:09:36.065372 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ******************** 2026-02-04 01:09:36.065379 | orchestrator | Wednesday 04 February 2026 01:07:09 +0000 (0:00:24.623) 0:01:35.496 **** 2026-02-04 01:09:36.065384 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-02-04 01:09:36.065395 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:09:36.065402 | orchestrator | skipping: [testbed-node-1] => 
(item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-02-04 01:09:36.065408 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:09:36.065413 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-02-04 01:09:36.065419 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:09:36.065429 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-02-04 01:09:36.065434 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:09:36.065440 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-02-04 01:09:36.065446 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:09:36.065451 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-02-04 01:09:36.065457 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:09:36.065462 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2026-02-04 01:09:36.065467 | orchestrator | 2026-02-04 01:09:36.065473 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2026-02-04 01:09:36.065478 | orchestrator | Wednesday 04 February 2026 01:07:14 +0000 (0:00:04.388) 0:01:39.885 **** 2026-02-04 01:09:36.065484 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-02-04 01:09:36.065491 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-02-04 01:09:36.065497 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-02-04 01:09:36.065503 | orchestrator | skipping: [testbed-node-0] 
2026-02-04 01:09:36.065508 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:09:36.065514 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:09:36.065525 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-02-04 01:09:36.065532 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:09:36.065538 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2026-02-04 01:09:36.065544 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-02-04 01:09:36.065550 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:09:36.065556 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-02-04 01:09:36.065562 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:09:36.065568 | orchestrator | 2026-02-04 01:09:36.065575 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2026-02-04 01:09:36.065581 | orchestrator | Wednesday 04 February 2026 01:07:17 +0000 (0:00:03.285) 0:01:43.171 **** 2026-02-04 01:09:36.065586 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-04 01:09:36.065592 | orchestrator | 2026-02-04 01:09:36.065598 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2026-02-04 01:09:36.065604 | orchestrator | Wednesday 04 February 2026 01:07:18 +0000 (0:00:01.363) 0:01:44.534 **** 2026-02-04 01:09:36.065610 | orchestrator | skipping: [testbed-manager] 2026-02-04 01:09:36.065616 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:09:36.065623 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:09:36.065628 | orchestrator | skipping: [testbed-node-2] 
2026-02-04 01:09:36.065634 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:09:36.065640 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:09:36.065647 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:09:36.065653 | orchestrator | 2026-02-04 01:09:36.065660 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2026-02-04 01:09:36.065666 | orchestrator | Wednesday 04 February 2026 01:07:19 +0000 (0:00:00.970) 0:01:45.504 **** 2026-02-04 01:09:36.065671 | orchestrator | skipping: [testbed-manager] 2026-02-04 01:09:36.065677 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:09:36.065684 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:09:36.065690 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:09:36.065699 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:09:36.065706 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:09:36.065712 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:09:36.065718 | orchestrator | 2026-02-04 01:09:36.065724 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2026-02-04 01:09:36.065730 | orchestrator | Wednesday 04 February 2026 01:07:23 +0000 (0:00:04.227) 0:01:49.732 **** 2026-02-04 01:09:36.065737 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-02-04 01:09:36.065743 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-02-04 01:09:36.065749 | orchestrator | skipping: [testbed-manager] 2026-02-04 01:09:36.065755 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:09:36.065761 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-02-04 01:09:36.065767 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:09:36.065774 | orchestrator | skipping: [testbed-node-2] => 
(item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-02-04 01:09:36.065780 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:09:36.065793 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-02-04 01:09:36 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:09:36.065807 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:09:36.065813 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-02-04 01:09:36.065826 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:09:36.065832 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-02-04 01:09:36.065843 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:09:36.065850 | orchestrator | 2026-02-04 01:09:36.065856 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2026-02-04 01:09:36.065862 | orchestrator | Wednesday 04 February 2026 01:07:26 +0000 (0:00:02.704) 0:01:52.437 **** 2026-02-04 01:09:36.065868 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-02-04 01:09:36.065874 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-02-04 01:09:36.065880 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:09:36.065886 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:09:36.065892 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-02-04 01:09:36.065898 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:09:36.065904 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-02-04 01:09:36.065911 |
orchestrator | skipping: [testbed-node-3] 2026-02-04 01:09:36.065916 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2026-02-04 01:09:36.065923 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-02-04 01:09:36.065929 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:09:36.065936 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-02-04 01:09:36.065942 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:09:36.065948 | orchestrator | 2026-02-04 01:09:36.065955 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2026-02-04 01:09:36.065961 | orchestrator | Wednesday 04 February 2026 01:07:29 +0000 (0:00:03.168) 0:01:55.605 **** 2026-02-04 01:09:36.065967 | orchestrator | [WARNING]: Skipped 2026-02-04 01:09:36.065974 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2026-02-04 01:09:36.065980 | orchestrator | due to this access issue: 2026-02-04 01:09:36.065986 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2026-02-04 01:09:36.065992 | orchestrator | not a directory 2026-02-04 01:09:36.065999 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-04 01:09:36.066005 | orchestrator | 2026-02-04 01:09:36.066011 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2026-02-04 01:09:36.066080 | orchestrator | Wednesday 04 February 2026 01:07:31 +0000 (0:00:01.622) 0:01:57.228 **** 2026-02-04 01:09:36.066086 | orchestrator | skipping: [testbed-manager] 2026-02-04 01:09:36.066092 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:09:36.066098 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:09:36.066104 | orchestrator | 
skipping: [testbed-node-2] 2026-02-04 01:09:36.066111 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:09:36.066116 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:09:36.066122 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:09:36.066129 | orchestrator | 2026-02-04 01:09:36.066135 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2026-02-04 01:09:36.066141 | orchestrator | Wednesday 04 February 2026 01:07:33 +0000 (0:00:02.260) 0:01:59.488 **** 2026-02-04 01:09:36.066147 | orchestrator | skipping: [testbed-manager] 2026-02-04 01:09:36.066153 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:09:36.066160 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:09:36.066166 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:09:36.066172 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:09:36.066179 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:09:36.066192 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:09:36.066199 | orchestrator | 2026-02-04 01:09:36.066205 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2026-02-04 01:09:36.066212 | orchestrator | Wednesday 04 February 2026 01:07:35 +0000 (0:00:01.414) 0:02:00.903 **** 2026-02-04 01:09:36.066219 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-04 01:09:36.066238 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 
'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-02-04 01:09:36.066264 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-04 01:09:36.066273 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 01:09:36.066280 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-04 01:09:36.066286 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-04 01:09:36.066293 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-04 01:09:36.066304 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 01:09:36.066311 | orchestrator | changed: [testbed-node-5] => (item={'key': 
'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-04 01:09:36.066322 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 01:09:36.066333 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 01:09:36.066340 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-04 01:09:36.066347 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-04 01:09:36.066353 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-04 01:09:36.066359 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-04 01:09:36.066374 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 
'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 01:09:36.066380 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-04 01:09:36.066392 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 01:09:36.066402 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', 
'/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-04 01:09:36.066410 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-02-04 01:09:36.066420 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-04 01:09:36.066433 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-04 01:09:36.066440 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 01:09:36.066446 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-04 01:09:36.066460 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-04 01:09:36.066467 | orchestrator | changed: 
[testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-04 01:09:36.066474 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 01:09:36.066481 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 01:09:36.066487 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 01:09:36.066498 | orchestrator | 2026-02-04 01:09:36.066505 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2026-02-04 01:09:36.066512 | orchestrator | Wednesday 04 February 2026 01:07:41 +0000 (0:00:06.293) 0:02:07.196 **** 2026-02-04 01:09:36.066519 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-02-04 01:09:36.066526 | orchestrator | skipping: [testbed-manager] 2026-02-04 01:09:36.066533 | orchestrator | 2026-02-04 01:09:36.066540 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-02-04 01:09:36.066547 | orchestrator | Wednesday 04 February 2026 01:07:43 +0000 (0:00:01.761) 0:02:08.958 **** 2026-02-04 01:09:36.066553 | orchestrator | 2026-02-04 01:09:36.066560 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-02-04 01:09:36.066567 | orchestrator | Wednesday 04 February 2026 01:07:43 +0000 (0:00:00.089) 0:02:09.047 **** 2026-02-04 01:09:36.066573 | orchestrator | 2026-02-04 01:09:36.066580 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-02-04 01:09:36.066586 | orchestrator | Wednesday 04 February 2026 01:07:43 +0000 (0:00:00.094) 0:02:09.142 **** 2026-02-04 01:09:36.066593 | orchestrator | 2026-02-04 01:09:36.066599 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-02-04 01:09:36.066606 | orchestrator | Wednesday 04 February 2026 01:07:43 +0000 (0:00:00.094) 0:02:09.237 **** 2026-02-04 01:09:36.066613 | orchestrator | 2026-02-04 01:09:36.066620 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-02-04 01:09:36.066627 | orchestrator | Wednesday 04 February 2026 01:07:43 +0000 (0:00:00.287) 0:02:09.524 **** 2026-02-04 01:09:36.066634 
| orchestrator | 2026-02-04 01:09:36.066641 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-02-04 01:09:36.066647 | orchestrator | Wednesday 04 February 2026 01:07:43 +0000 (0:00:00.071) 0:02:09.596 **** 2026-02-04 01:09:36.066654 | orchestrator | 2026-02-04 01:09:36.066661 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-02-04 01:09:36.066667 | orchestrator | Wednesday 04 February 2026 01:07:43 +0000 (0:00:00.071) 0:02:09.667 **** 2026-02-04 01:09:36.066674 | orchestrator | 2026-02-04 01:09:36.066680 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] ************* 2026-02-04 01:09:36.066685 | orchestrator | Wednesday 04 February 2026 01:07:44 +0000 (0:00:00.107) 0:02:09.775 **** 2026-02-04 01:09:36.066691 | orchestrator | changed: [testbed-manager] 2026-02-04 01:09:36.066697 | orchestrator | 2026-02-04 01:09:36.066707 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ****** 2026-02-04 01:09:36.066714 | orchestrator | Wednesday 04 February 2026 01:08:08 +0000 (0:00:24.427) 0:02:34.202 **** 2026-02-04 01:09:36.066720 | orchestrator | changed: [testbed-node-3] 2026-02-04 01:09:36.066726 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:09:36.066731 | orchestrator | changed: [testbed-manager] 2026-02-04 01:09:36.066737 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:09:36.066743 | orchestrator | changed: [testbed-node-4] 2026-02-04 01:09:36.066749 | orchestrator | changed: [testbed-node-5] 2026-02-04 01:09:36.066754 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:09:36.066760 | orchestrator | 2026-02-04 01:09:36.066769 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2026-02-04 01:09:36.066775 | orchestrator | Wednesday 04 February 2026 01:08:21 +0000 (0:00:13.229) 0:02:47.432 **** 2026-02-04 01:09:36.066781 | 
orchestrator | changed: [testbed-node-1] 2026-02-04 01:09:36.066786 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:09:36.066792 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:09:36.066803 | orchestrator | 2026-02-04 01:09:36.066809 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2026-02-04 01:09:36.066815 | orchestrator | Wednesday 04 February 2026 01:08:32 +0000 (0:00:10.404) 0:02:57.837 **** 2026-02-04 01:09:36.066821 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:09:36.066827 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:09:36.066832 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:09:36.066838 | orchestrator | 2026-02-04 01:09:36.066844 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2026-02-04 01:09:36.066850 | orchestrator | Wednesday 04 February 2026 01:08:43 +0000 (0:00:11.231) 0:03:09.069 **** 2026-02-04 01:09:36.066856 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:09:36.066862 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:09:36.066869 | orchestrator | changed: [testbed-manager] 2026-02-04 01:09:36.066876 | orchestrator | changed: [testbed-node-5] 2026-02-04 01:09:36.066882 | orchestrator | changed: [testbed-node-4] 2026-02-04 01:09:36.066889 | orchestrator | changed: [testbed-node-3] 2026-02-04 01:09:36.066896 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:09:36.066903 | orchestrator | 2026-02-04 01:09:36.066909 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 2026-02-04 01:09:36.066916 | orchestrator | Wednesday 04 February 2026 01:09:00 +0000 (0:00:16.966) 0:03:26.035 **** 2026-02-04 01:09:36.066922 | orchestrator | changed: [testbed-manager] 2026-02-04 01:09:36.066929 | orchestrator | 2026-02-04 01:09:36.066936 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2026-02-04 
01:09:36.066942 | orchestrator | Wednesday 04 February 2026 01:09:09 +0000 (0:00:09.600) 0:03:35.636 **** 2026-02-04 01:09:36.066948 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:09:36.066955 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:09:36.066962 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:09:36.066968 | orchestrator | 2026-02-04 01:09:36.066974 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] *** 2026-02-04 01:09:36.066980 | orchestrator | Wednesday 04 February 2026 01:09:16 +0000 (0:00:06.430) 0:03:42.066 **** 2026-02-04 01:09:36.066986 | orchestrator | changed: [testbed-manager] 2026-02-04 01:09:36.066993 | orchestrator | 2026-02-04 01:09:36.066999 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2026-02-04 01:09:36.067005 | orchestrator | Wednesday 04 February 2026 01:09:22 +0000 (0:00:05.901) 0:03:47.968 **** 2026-02-04 01:09:36.067012 | orchestrator | changed: [testbed-node-3] 2026-02-04 01:09:36.067018 | orchestrator | changed: [testbed-node-5] 2026-02-04 01:09:36.067025 | orchestrator | changed: [testbed-node-4] 2026-02-04 01:09:36.067047 | orchestrator | 2026-02-04 01:09:36.067054 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 01:09:36.067062 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-02-04 01:09:36.067069 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-02-04 01:09:36.067075 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-02-04 01:09:36.067081 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-02-04 01:09:36.067087 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  
rescued=0 ignored=0 2026-02-04 01:09:36.067094 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-02-04 01:09:36.067100 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-02-04 01:09:36.067111 | orchestrator | 2026-02-04 01:09:36.067117 | orchestrator | 2026-02-04 01:09:36.067123 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-04 01:09:36.067130 | orchestrator | Wednesday 04 February 2026 01:09:32 +0000 (0:00:10.763) 0:03:58.731 **** 2026-02-04 01:09:36.067136 | orchestrator | =============================================================================== 2026-02-04 01:09:36.067142 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 35.12s 2026-02-04 01:09:36.067148 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 24.62s 2026-02-04 01:09:36.067154 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 24.43s 2026-02-04 01:09:36.067161 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 16.97s 2026-02-04 01:09:36.067173 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 13.23s 2026-02-04 01:09:36.067180 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ----------- 11.23s 2026-02-04 01:09:36.067186 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container ------------- 10.76s 2026-02-04 01:09:36.067193 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container -------------- 10.40s 2026-02-04 01:09:36.067199 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 9.60s 2026-02-04 01:09:36.067210 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 7.55s 2026-02-04 
01:09:36.067216 | orchestrator | prometheus : Copying over config.json files ----------------------------- 6.44s 2026-02-04 01:09:36.067222 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container -------- 6.43s 2026-02-04 01:09:36.067228 | orchestrator | prometheus : Check prometheus containers -------------------------------- 6.29s 2026-02-04 01:09:36.067235 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 5.90s 2026-02-04 01:09:36.067241 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 4.96s 2026-02-04 01:09:36.067247 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 4.39s 2026-02-04 01:09:36.067253 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 4.23s 2026-02-04 01:09:36.067259 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 3.71s 2026-02-04 01:09:36.067265 | orchestrator | prometheus : Copying over prometheus alertmanager config file ----------- 3.29s 2026-02-04 01:09:36.067272 | orchestrator | prometheus : Copying config file for blackbox exporter ------------------ 3.17s 2026-02-04 01:09:39.109570 | orchestrator | 2026-02-04 01:09:39 | INFO  | Task e1a0af0c-c5dc-4576-858b-027414f2952c is in state STARTED 2026-02-04 01:09:39.112765 | orchestrator | 2026-02-04 01:09:39 | INFO  | Task a36b03f7-eb98-4e8a-8cb0-ab2bc3675530 is in state STARTED 2026-02-04 01:09:39.115317 | orchestrator | 2026-02-04 01:09:39 | INFO  | Task 6de41111-19cd-4ecd-8571-65308ec3f489 is in state STARTED 2026-02-04 01:09:39.116659 | orchestrator | 2026-02-04 01:09:39 | INFO  | Task 6598ee3a-f66e-4301-b76f-72a4be67b4a0 is in state STARTED 2026-02-04 01:09:39.116731 | orchestrator | 2026-02-04 01:09:39 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:09:42.170879 | orchestrator | 2026-02-04 01:09:42 | INFO  | Task 
e1a0af0c-c5dc-4576-858b-027414f2952c is in state STARTED 2026-02-04 01:11:22.457348 | orchestrator | 2026-02-04 01:11:22 | INFO  | Task 
e1a0af0c-c5dc-4576-858b-027414f2952c is in state STARTED 2026-02-04 01:11:22.457790 | orchestrator | 2026-02-04 01:11:22 | INFO  | Task a36b03f7-eb98-4e8a-8cb0-ab2bc3675530 is in state STARTED 2026-02-04 01:11:22.458779 | orchestrator | 2026-02-04 01:11:22 | INFO  | Task 6de41111-19cd-4ecd-8571-65308ec3f489 is in state STARTED 2026-02-04 01:11:22.459500 | orchestrator | 2026-02-04 01:11:22 | INFO  | Task 6598ee3a-f66e-4301-b76f-72a4be67b4a0 is in state STARTED 2026-02-04 01:11:22.459576 | orchestrator | 2026-02-04 01:11:22 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:11:25.480519 | orchestrator | 2026-02-04 01:11:25 | INFO  | Task e1a0af0c-c5dc-4576-858b-027414f2952c is in state STARTED 2026-02-04 01:11:25.480844 | orchestrator | 2026-02-04 01:11:25 | INFO  | Task a36b03f7-eb98-4e8a-8cb0-ab2bc3675530 is in state STARTED 2026-02-04 01:11:25.481800 | orchestrator | 2026-02-04 01:11:25 | INFO  | Task 6de41111-19cd-4ecd-8571-65308ec3f489 is in state STARTED 2026-02-04 01:11:25.483146 | orchestrator | 2026-02-04 01:11:25 | INFO  | Task 6598ee3a-f66e-4301-b76f-72a4be67b4a0 is in state STARTED 2026-02-04 01:11:25.483165 | orchestrator | 2026-02-04 01:11:25 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:11:28.516060 | orchestrator | 2026-02-04 01:11:28 | INFO  | Task e1a0af0c-c5dc-4576-858b-027414f2952c is in state STARTED 2026-02-04 01:11:28.516557 | orchestrator | 2026-02-04 01:11:28 | INFO  | Task a36b03f7-eb98-4e8a-8cb0-ab2bc3675530 is in state STARTED 2026-02-04 01:11:28.517520 | orchestrator | 2026-02-04 01:11:28 | INFO  | Task 6de41111-19cd-4ecd-8571-65308ec3f489 is in state STARTED 2026-02-04 01:11:28.519012 | orchestrator | 2026-02-04 01:11:28 | INFO  | Task 6598ee3a-f66e-4301-b76f-72a4be67b4a0 is in state STARTED 2026-02-04 01:11:28.519061 | orchestrator | 2026-02-04 01:11:28 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:11:31.615827 | orchestrator | 2026-02-04 01:11:31 | INFO  | Task 
e1a0af0c-c5dc-4576-858b-027414f2952c is in state STARTED 2026-02-04 01:11:31.616133 | orchestrator | 2026-02-04 01:11:31 | INFO  | Task a36b03f7-eb98-4e8a-8cb0-ab2bc3675530 is in state STARTED 2026-02-04 01:11:31.617152 | orchestrator | 2026-02-04 01:11:31 | INFO  | Task 6de41111-19cd-4ecd-8571-65308ec3f489 is in state STARTED 2026-02-04 01:11:31.617846 | orchestrator | 2026-02-04 01:11:31 | INFO  | Task 6598ee3a-f66e-4301-b76f-72a4be67b4a0 is in state STARTED 2026-02-04 01:11:31.617884 | orchestrator | 2026-02-04 01:11:31 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:11:34.638582 | orchestrator | 2026-02-04 01:11:34 | INFO  | Task e2517f23-d1d1-42a5-ab66-a18c5aaf73f1 is in state STARTED 2026-02-04 01:11:34.639603 | orchestrator | 2026-02-04 01:11:34 | INFO  | Task e1a0af0c-c5dc-4576-858b-027414f2952c is in state STARTED 2026-02-04 01:11:34.639926 | orchestrator | 2026-02-04 01:11:34 | INFO  | Task a36b03f7-eb98-4e8a-8cb0-ab2bc3675530 is in state STARTED 2026-02-04 01:11:34.640514 | orchestrator | 2026-02-04 01:11:34 | INFO  | Task 6de41111-19cd-4ecd-8571-65308ec3f489 is in state STARTED 2026-02-04 01:11:34.641614 | orchestrator | 2026-02-04 01:11:34 | INFO  | Task 6598ee3a-f66e-4301-b76f-72a4be67b4a0 is in state SUCCESS 2026-02-04 01:11:34.642946 | orchestrator | 2026-02-04 01:11:34.642995 | orchestrator | 2026-02-04 01:11:34.643005 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-04 01:11:34.643013 | orchestrator | 2026-02-04 01:11:34.643019 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-04 01:11:34.643026 | orchestrator | Wednesday 04 February 2026 01:09:28 +0000 (0:00:00.304) 0:00:00.304 **** 2026-02-04 01:11:34.643093 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:11:34.643104 | orchestrator | ok: [testbed-node-1] 2026-02-04 01:11:34.643110 | orchestrator | ok: [testbed-node-2] 2026-02-04 01:11:34.643117 | orchestrator | 
2026-02-04 01:11:34.643124 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-04 01:11:34.643130 | orchestrator | Wednesday 04 February 2026 01:09:28 +0000 (0:00:00.326) 0:00:00.631 **** 2026-02-04 01:11:34.643137 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2026-02-04 01:11:34.643144 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2026-02-04 01:11:34.643151 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2026-02-04 01:11:34.643156 | orchestrator | 2026-02-04 01:11:34.643160 | orchestrator | PLAY [Apply role barbican] ***************************************************** 2026-02-04 01:11:34.643164 | orchestrator | 2026-02-04 01:11:34.643168 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-02-04 01:11:34.643172 | orchestrator | Wednesday 04 February 2026 01:09:29 +0000 (0:00:00.473) 0:00:01.105 **** 2026-02-04 01:11:34.643176 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 01:11:34.643181 | orchestrator | 2026-02-04 01:11:34.643185 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2026-02-04 01:11:34.643190 | orchestrator | Wednesday 04 February 2026 01:09:29 +0000 (0:00:00.570) 0:00:01.675 **** 2026-02-04 01:11:34.643206 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2026-02-04 01:11:34.643229 | orchestrator | 2026-02-04 01:11:34.643233 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2026-02-04 01:11:34.643237 | orchestrator | Wednesday 04 February 2026 01:09:32 +0000 (0:00:02.883) 0:00:04.559 **** 2026-02-04 01:11:34.643241 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2026-02-04 01:11:34.643245 | orchestrator | 
changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2026-02-04 01:11:34.643249 | orchestrator | 2026-02-04 01:11:34.643255 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2026-02-04 01:11:34.643261 | orchestrator | Wednesday 04 February 2026 01:09:38 +0000 (0:00:06.101) 0:00:10.660 **** 2026-02-04 01:11:34.643267 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-04 01:11:34.643273 | orchestrator | 2026-02-04 01:11:34.643279 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2026-02-04 01:11:34.643285 | orchestrator | Wednesday 04 February 2026 01:09:42 +0000 (0:00:03.408) 0:00:14.069 **** 2026-02-04 01:11:34.643291 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service) 2026-02-04 01:11:34.643296 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-04 01:11:34.643302 | orchestrator | 2026-02-04 01:11:34.643308 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2026-02-04 01:11:34.643314 | orchestrator | Wednesday 04 February 2026 01:09:45 +0000 (0:00:03.842) 0:00:17.912 **** 2026-02-04 01:11:34.643321 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-04 01:11:34.643327 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2026-02-04 01:11:34.643333 | orchestrator | changed: [testbed-node-0] => (item=creator) 2026-02-04 01:11:34.643339 | orchestrator | changed: [testbed-node-0] => (item=observer) 2026-02-04 01:11:34.643345 | orchestrator | changed: [testbed-node-0] => (item=audit) 2026-02-04 01:11:34.643351 | orchestrator | 2026-02-04 01:11:34.643358 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2026-02-04 01:11:34.643363 | orchestrator | Wednesday 04 February 2026 01:10:02 +0000 (0:00:16.756) 0:00:34.669 **** 2026-02-04 01:11:34.643366 | 
orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2026-02-04 01:11:34.643370 | orchestrator | 2026-02-04 01:11:34.643374 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2026-02-04 01:11:34.643378 | orchestrator | Wednesday 04 February 2026 01:10:06 +0000 (0:00:04.383) 0:00:39.052 **** 2026-02-04 01:11:34.643385 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-04 01:11:34.643477 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-04 01:11:34.643510 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-04 01:11:34.643528 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-04 01:11:34.643536 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-04 01:11:34.643542 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-04 01:11:34.643556 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-04 01:11:34.643562 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-04 01:11:34.643574 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-04 01:11:34.643578 | orchestrator | 2026-02-04 01:11:34.643582 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2026-02-04 01:11:34.643586 | orchestrator | Wednesday 04 February 2026 01:10:08 +0000 (0:00:01.946) 0:00:41.001 **** 2026-02-04 01:11:34.643590 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2026-02-04 01:11:34.643594 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2026-02-04 01:11:34.643598 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2026-02-04 01:11:34.643601 | orchestrator | 2026-02-04 01:11:34.643605 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2026-02-04 01:11:34.643609 | orchestrator | Wednesday 04 February 2026 01:10:10 
+0000 (0:00:01.363) 0:00:42.364 **** 2026-02-04 01:11:34.643613 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:11:34.643617 | orchestrator | 2026-02-04 01:11:34.643621 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2026-02-04 01:11:34.643626 | orchestrator | Wednesday 04 February 2026 01:10:10 +0000 (0:00:00.254) 0:00:42.618 **** 2026-02-04 01:11:34.643629 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:11:34.643633 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:11:34.643637 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:11:34.643641 | orchestrator | 2026-02-04 01:11:34.643645 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-02-04 01:11:34.643649 | orchestrator | Wednesday 04 February 2026 01:10:11 +0000 (0:00:01.286) 0:00:43.905 **** 2026-02-04 01:11:34.643653 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 01:11:34.643657 | orchestrator | 2026-02-04 01:11:34.643660 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2026-02-04 01:11:34.643664 | orchestrator | Wednesday 04 February 2026 01:10:13 +0000 (0:00:01.415) 0:00:45.321 **** 2026-02-04 01:11:34.643668 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-04 01:11:34.643676 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-04 01:11:34.643687 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 
'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-04 01:11:34.643691 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-04 01:11:34.643695 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-04 01:11:34.643699 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-04 01:11:34.643704 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-04 01:11:34.643717 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-04 01:11:34.643721 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-04 01:11:34.643725 | orchestrator | 2026-02-04 01:11:34.643729 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2026-02-04 01:11:34.643733 | orchestrator | Wednesday 04 February 2026 01:10:17 +0000 (0:00:04.323) 0:00:49.644 **** 2026-02-04 01:11:34.643744 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-04 01:11:34.643749 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-04 01:11:34.643755 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-04 01:11:34.643763 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:11:34.643770 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-04 01:11:34.643774 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-04 01:11:34.643781 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-04 01:11:34.643785 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:11:34.643789 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-04 01:11:34.643793 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-04 01:11:34.643801 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-04 01:11:34.643805 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:11:34.643809 | orchestrator | 2026-02-04 01:11:34.643813 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2026-02-04 01:11:34.643817 | orchestrator | Wednesday 04 February 2026 01:10:19 +0000 (0:00:01.901) 0:00:51.546 **** 2026-02-04 01:11:34.643825 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-04 01:11:34.643829 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-04 01:11:34.643836 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-04 01:11:34.643840 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:11:34.643844 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-04 01:11:34.643851 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-04 01:11:34.643855 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 
'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-04 01:11:34.643860 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:11:34.643867 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-04 01:11:34.643874 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-04 01:11:34.643878 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-04 01:11:34.643882 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:11:34.643886 | orchestrator | 2026-02-04 01:11:34.643890 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2026-02-04 01:11:34.643894 | orchestrator | Wednesday 04 February 2026 01:10:21 +0000 (0:00:02.131) 0:00:53.678 **** 2026-02-04 01:11:34.643898 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-04 01:11:34.644156 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-04 01:11:34.644174 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 
'tls_backend': 'no'}}}}) 2026-02-04 01:11:34.644181 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-04 01:11:34.644189 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-04 01:11:34.644203 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-04 01:11:34.644209 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-04 01:11:34.644221 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-04 01:11:34.644228 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-04 01:11:34.644236 | orchestrator | 2026-02-04 01:11:34.644240 | 
orchestrator | TASK [barbican : Copying over barbican-api.ini] ********************************
2026-02-04 01:11:34.644245 | orchestrator | Wednesday 04 February 2026 01:10:26 +0000 (0:00:04.727) 0:00:58.405 ****
2026-02-04 01:11:34.644249 | orchestrator | changed: [testbed-node-2]
2026-02-04 01:11:34.644253 | orchestrator | changed: [testbed-node-0]
2026-02-04 01:11:34.644257 | orchestrator | changed: [testbed-node-1]
2026-02-04 01:11:34.644261 | orchestrator |
2026-02-04 01:11:34.644265 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] **********
2026-02-04 01:11:34.644297 | orchestrator | Wednesday 04 February 2026 01:10:29 +0000 (0:00:03.259) 0:01:01.665 ****
2026-02-04 01:11:34.644301 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-04 01:11:34.644305 | orchestrator |
2026-02-04 01:11:34.644309 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] **************************
2026-02-04 01:11:34.644315 | orchestrator | Wednesday 04 February 2026 01:10:30 +0000 (0:00:01.230) 0:01:02.895 ****
2026-02-04 01:11:34.644319 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:11:34.644323 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:11:34.644327 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:11:34.644331 | orchestrator |
2026-02-04 01:11:34.644334 | orchestrator | TASK [barbican : Copying over barbican.conf] ***********************************
2026-02-04 01:11:34.644338 | orchestrator | Wednesday 04 February 2026 01:10:31 +0000 (0:00:01.090) 0:01:03.986 ****
2026-02-04 01:11:34.644342 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/',
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-04 01:11:34.644349 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-04 01:11:34.644357 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-04 01:11:34.644362 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-04 01:11:34.644369 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-04 01:11:34.644376 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 
'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-04 01:11:34.644381 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-04 01:11:34.644385 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-04 01:11:34.644389 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-04 01:11:34.644393 | orchestrator | 2026-02-04 01:11:34.644397 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2026-02-04 01:11:34.644401 | orchestrator | Wednesday 04 February 2026 01:10:45 +0000 (0:00:13.783) 0:01:17.770 **** 2026-02-04 01:11:34.644407 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-04 01:11:34.644414 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-04 01:11:34.644424 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-04 01:11:34.644430 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:11:34.644436 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 
'tls_backend': 'no'}}}})  2026-02-04 01:11:34.644445 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-04 01:11:34.644458 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-04 01:11:34.644464 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:11:34.644476 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-04 01:11:34.644487 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-04 01:11:34.644493 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-04 01:11:34.644499 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:11:34.644505 | orchestrator | 2026-02-04 01:11:34.644511 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2026-02-04 01:11:34.644518 | orchestrator | Wednesday 04 February 2026 01:10:46 
+0000 (0:00:01.223) 0:01:18.993 **** 2026-02-04 01:11:34.644524 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-04 01:11:34.644535 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-04 01:11:34.644545 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-04 01:11:34.644557 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-04 01:11:34.644565 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-04 01:11:34.644571 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-04 01:11:34.644575 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-04 01:11:34.644598 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-04 01:11:34.644603 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-04 01:11:34.644614 | orchestrator | 2026-02-04 01:11:34.644621 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-02-04 01:11:34.644627 | orchestrator | Wednesday 04 February 2026 01:10:50 +0000 (0:00:03.932) 0:01:22.926 **** 2026-02-04 01:11:34.644633 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:11:34.644639 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:11:34.644649 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:11:34.644656 | orchestrator | 2026-02-04 01:11:34.644662 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2026-02-04 01:11:34.644668 | orchestrator | Wednesday 04 February 2026 01:10:51 +0000 (0:00:00.392) 0:01:23.319 **** 2026-02-04 01:11:34.644675 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:11:34.644680 | orchestrator | 2026-02-04 01:11:34.644687 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2026-02-04 01:11:34.644694 | orchestrator | Wednesday 04 February 2026 
01:10:53 +0000 (0:00:02.180) 0:01:25.499 **** 2026-02-04 01:11:34.644700 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:11:34.644706 | orchestrator | 2026-02-04 01:11:34.644712 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2026-02-04 01:11:34.644718 | orchestrator | Wednesday 04 February 2026 01:10:55 +0000 (0:00:02.210) 0:01:27.710 **** 2026-02-04 01:11:34.644724 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:11:34.644730 | orchestrator | 2026-02-04 01:11:34.644737 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-02-04 01:11:34.644743 | orchestrator | Wednesday 04 February 2026 01:11:07 +0000 (0:00:11.809) 0:01:39.520 **** 2026-02-04 01:11:34.644748 | orchestrator | 2026-02-04 01:11:34.644753 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-02-04 01:11:34.644759 | orchestrator | Wednesday 04 February 2026 01:11:07 +0000 (0:00:00.148) 0:01:39.669 **** 2026-02-04 01:11:34.644764 | orchestrator | 2026-02-04 01:11:34.644770 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-02-04 01:11:34.644775 | orchestrator | Wednesday 04 February 2026 01:11:07 +0000 (0:00:00.125) 0:01:39.794 **** 2026-02-04 01:11:34.644780 | orchestrator | 2026-02-04 01:11:34.644786 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2026-02-04 01:11:34.644791 | orchestrator | Wednesday 04 February 2026 01:11:07 +0000 (0:00:00.158) 0:01:39.952 **** 2026-02-04 01:11:34.644797 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:11:34.644803 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:11:34.644809 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:11:34.644814 | orchestrator | 2026-02-04 01:11:34.644820 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 
2026-02-04 01:11:34.644827 | orchestrator | Wednesday 04 February 2026 01:11:15 +0000 (0:00:07.481) 0:01:47.434 **** 2026-02-04 01:11:34.644833 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:11:34.644839 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:11:34.644845 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:11:34.644851 | orchestrator | 2026-02-04 01:11:34.644857 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2026-02-04 01:11:34.644863 | orchestrator | Wednesday 04 February 2026 01:11:25 +0000 (0:00:10.381) 0:01:57.815 **** 2026-02-04 01:11:34.644871 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:11:34.644877 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:11:34.644882 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:11:34.644888 | orchestrator | 2026-02-04 01:11:34.644895 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 01:11:34.644910 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-04 01:11:34.644918 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-04 01:11:34.644925 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-04 01:11:34.644933 | orchestrator | 2026-02-04 01:11:34.644939 | orchestrator | 2026-02-04 01:11:34.644946 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-04 01:11:34.644953 | orchestrator | Wednesday 04 February 2026 01:11:32 +0000 (0:00:07.118) 0:02:04.933 **** 2026-02-04 01:11:34.644959 | orchestrator | =============================================================================== 2026-02-04 01:11:34.644966 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 16.76s 2026-02-04 01:11:34.644979 
| orchestrator | barbican : Copying over barbican.conf ---------------------------------- 13.78s 2026-02-04 01:11:34.644987 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 11.81s 2026-02-04 01:11:34.645064 | orchestrator | barbican : Restart barbican-keystone-listener container ---------------- 10.38s 2026-02-04 01:11:34.645073 | orchestrator | barbican : Restart barbican-api container ------------------------------- 7.48s 2026-02-04 01:11:34.645077 | orchestrator | barbican : Restart barbican-worker container ---------------------------- 7.12s 2026-02-04 01:11:34.645082 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 6.10s 2026-02-04 01:11:34.645087 | orchestrator | barbican : Copying over config.json files for services ------------------ 4.73s 2026-02-04 01:11:34.645092 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 4.38s 2026-02-04 01:11:34.645096 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 4.32s 2026-02-04 01:11:34.645101 | orchestrator | barbican : Check barbican containers ------------------------------------ 3.93s 2026-02-04 01:11:34.645106 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 3.84s 2026-02-04 01:11:34.645111 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.41s 2026-02-04 01:11:34.645115 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 3.26s 2026-02-04 01:11:34.645122 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 2.88s 2026-02-04 01:11:34.645129 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.21s 2026-02-04 01:11:34.645136 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.18s 2026-02-04 01:11:34.645149 | orchestrator 
| service-cert-copy : barbican | Copying over backend internal TLS key ---- 2.13s 2026-02-04 01:11:34.645155 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 1.95s 2026-02-04 01:11:34.645162 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS certificate --- 1.90s 2026-02-04 01:11:34.645168 | orchestrator | 2026-02-04 01:11:34 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:11:37.675352 | orchestrator | 2026-02-04 01:11:37 | INFO  | Task e2517f23-d1d1-42a5-ab66-a18c5aaf73f1 is in state STARTED 2026-02-04 01:11:37.678234 | orchestrator | 2026-02-04 01:11:37 | INFO  | Task e1a0af0c-c5dc-4576-858b-027414f2952c is in state STARTED 2026-02-04 01:11:37.679177 | orchestrator | 2026-02-04 01:11:37 | INFO  | Task a36b03f7-eb98-4e8a-8cb0-ab2bc3675530 is in state STARTED 2026-02-04 01:11:37.680486 | orchestrator | 2026-02-04 01:11:37 | INFO  | Task 6de41111-19cd-4ecd-8571-65308ec3f489 is in state STARTED 2026-02-04 01:11:37.680518 | orchestrator | 2026-02-04 01:11:37 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:11:40.714515 | orchestrator | 2026-02-04 01:11:40 | INFO  | Task e2517f23-d1d1-42a5-ab66-a18c5aaf73f1 is in state STARTED 2026-02-04 01:11:40.715485 | orchestrator | 2026-02-04 01:11:40 | INFO  | Task e1a0af0c-c5dc-4576-858b-027414f2952c is in state STARTED 2026-02-04 01:11:40.716083 | orchestrator | 2026-02-04 01:11:40 | INFO  | Task a36b03f7-eb98-4e8a-8cb0-ab2bc3675530 is in state STARTED 2026-02-04 01:11:40.717697 | orchestrator | 2026-02-04 01:11:40 | INFO  | Task 6de41111-19cd-4ecd-8571-65308ec3f489 is in state STARTED 2026-02-04 01:11:40.717723 | orchestrator | 2026-02-04 01:11:40 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:11:43.753514 | orchestrator | 2026-02-04 01:11:43 | INFO  | Task e2517f23-d1d1-42a5-ab66-a18c5aaf73f1 is in state STARTED 2026-02-04 01:11:43.754318 | orchestrator | 2026-02-04 01:11:43 | INFO  | Task 
e1a0af0c-c5dc-4576-858b-027414f2952c is in state STARTED 2026-02-04 01:11:43.754954 | orchestrator | 2026-02-04 01:11:43 | INFO  | Task a36b03f7-eb98-4e8a-8cb0-ab2bc3675530 is in state STARTED 2026-02-04 01:11:43.755992 | orchestrator | 2026-02-04 01:11:43 | INFO  | Task 6de41111-19cd-4ecd-8571-65308ec3f489 is in state STARTED 2026-02-04 01:11:43.756011 | orchestrator | 2026-02-04 01:11:43 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:11:46.787640 | orchestrator | 2026-02-04 01:11:46 | INFO  | Task e2517f23-d1d1-42a5-ab66-a18c5aaf73f1 is in state STARTED 2026-02-04 01:11:46.788547 | orchestrator | 2026-02-04 01:11:46 | INFO  | Task e1a0af0c-c5dc-4576-858b-027414f2952c is in state STARTED 2026-02-04 01:11:46.789766 | orchestrator | 2026-02-04 01:11:46 | INFO  | Task a36b03f7-eb98-4e8a-8cb0-ab2bc3675530 is in state STARTED 2026-02-04 01:11:46.790731 | orchestrator | 2026-02-04 01:11:46 | INFO  | Task 6de41111-19cd-4ecd-8571-65308ec3f489 is in state STARTED 2026-02-04 01:11:46.790771 | orchestrator | 2026-02-04 01:11:46 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:11:49.834881 | orchestrator | 2026-02-04 01:11:49 | INFO  | Task e2517f23-d1d1-42a5-ab66-a18c5aaf73f1 is in state STARTED 2026-02-04 01:11:49.835513 | orchestrator | 2026-02-04 01:11:49 | INFO  | Task e1a0af0c-c5dc-4576-858b-027414f2952c is in state STARTED 2026-02-04 01:11:49.838454 | orchestrator | 2026-02-04 01:11:49 | INFO  | Task a36b03f7-eb98-4e8a-8cb0-ab2bc3675530 is in state STARTED 2026-02-04 01:11:49.839142 | orchestrator | 2026-02-04 01:11:49 | INFO  | Task 6de41111-19cd-4ecd-8571-65308ec3f489 is in state STARTED 2026-02-04 01:11:49.839192 | orchestrator | 2026-02-04 01:11:49 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:11:52.866430 | orchestrator | 2026-02-04 01:11:52 | INFO  | Task e2517f23-d1d1-42a5-ab66-a18c5aaf73f1 is in state STARTED 2026-02-04 01:11:52.866914 | orchestrator | 2026-02-04 01:11:52 | INFO  | Task 
e1a0af0c-c5dc-4576-858b-027414f2952c is in state STARTED 2026-02-04 01:11:52.868492 | orchestrator | 2026-02-04 01:11:52 | INFO  | Task a36b03f7-eb98-4e8a-8cb0-ab2bc3675530 is in state STARTED 2026-02-04 01:11:52.869211 | orchestrator | 2026-02-04 01:11:52 | INFO  | Task 6de41111-19cd-4ecd-8571-65308ec3f489 is in state STARTED 2026-02-04 01:11:52.869252 | orchestrator | 2026-02-04 01:11:52 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:11:55.897439 | orchestrator | 2026-02-04 01:11:55 | INFO  | Task e2517f23-d1d1-42a5-ab66-a18c5aaf73f1 is in state STARTED 2026-02-04 01:11:55.898045 | orchestrator | 2026-02-04 01:11:55 | INFO  | Task e1a0af0c-c5dc-4576-858b-027414f2952c is in state STARTED 2026-02-04 01:11:55.898951 | orchestrator | 2026-02-04 01:11:55 | INFO  | Task a36b03f7-eb98-4e8a-8cb0-ab2bc3675530 is in state STARTED 2026-02-04 01:11:55.899727 | orchestrator | 2026-02-04 01:11:55 | INFO  | Task 6de41111-19cd-4ecd-8571-65308ec3f489 is in state STARTED 2026-02-04 01:11:55.899818 | orchestrator | 2026-02-04 01:11:55 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:11:58.958749 | orchestrator | 2026-02-04 01:11:58 | INFO  | Task e2517f23-d1d1-42a5-ab66-a18c5aaf73f1 is in state STARTED 2026-02-04 01:11:58.959081 | orchestrator | 2026-02-04 01:11:58 | INFO  | Task e1a0af0c-c5dc-4576-858b-027414f2952c is in state STARTED 2026-02-04 01:11:58.960245 | orchestrator | 2026-02-04 01:11:58 | INFO  | Task a36b03f7-eb98-4e8a-8cb0-ab2bc3675530 is in state STARTED 2026-02-04 01:11:58.961848 | orchestrator | 2026-02-04 01:11:58 | INFO  | Task 6de41111-19cd-4ecd-8571-65308ec3f489 is in state STARTED 2026-02-04 01:11:58.961869 | orchestrator | 2026-02-04 01:11:58 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:12:02.005543 | orchestrator | 2026-02-04 01:12:02 | INFO  | Task e2517f23-d1d1-42a5-ab66-a18c5aaf73f1 is in state STARTED 2026-02-04 01:12:02.006587 | orchestrator | 2026-02-04 01:12:02 | INFO  | Task 
e1a0af0c-c5dc-4576-858b-027414f2952c is in state STARTED 2026-02-04 01:12:02.008308 | orchestrator | 2026-02-04 01:12:02 | INFO  | Task a36b03f7-eb98-4e8a-8cb0-ab2bc3675530 is in state STARTED 2026-02-04 01:12:02.012161 | orchestrator | 2026-02-04 01:12:02 | INFO  | Task 6de41111-19cd-4ecd-8571-65308ec3f489 is in state STARTED 2026-02-04 01:12:02.012265 | orchestrator | 2026-02-04 01:12:02 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:12:05.096335 | orchestrator | 2026-02-04 01:12:05 | INFO  | Task e2517f23-d1d1-42a5-ab66-a18c5aaf73f1 is in state STARTED 2026-02-04 01:12:05.097553 | orchestrator | 2026-02-04 01:12:05 | INFO  | Task e1a0af0c-c5dc-4576-858b-027414f2952c is in state STARTED 2026-02-04 01:12:05.099160 | orchestrator | 2026-02-04 01:12:05 | INFO  | Task a36b03f7-eb98-4e8a-8cb0-ab2bc3675530 is in state STARTED 2026-02-04 01:12:05.102684 | orchestrator | 2026-02-04 01:12:05 | INFO  | Task 6de41111-19cd-4ecd-8571-65308ec3f489 is in state STARTED 2026-02-04 01:12:05.102746 | orchestrator | 2026-02-04 01:12:05 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:12:08.172720 | orchestrator | 2026-02-04 01:12:08 | INFO  | Task e2517f23-d1d1-42a5-ab66-a18c5aaf73f1 is in state STARTED 2026-02-04 01:12:08.172780 | orchestrator | 2026-02-04 01:12:08 | INFO  | Task e1a0af0c-c5dc-4576-858b-027414f2952c is in state STARTED 2026-02-04 01:12:08.172908 | orchestrator | 2026-02-04 01:12:08 | INFO  | Task a36b03f7-eb98-4e8a-8cb0-ab2bc3675530 is in state STARTED 2026-02-04 01:12:08.174068 | orchestrator | 2026-02-04 01:12:08 | INFO  | Task 6de41111-19cd-4ecd-8571-65308ec3f489 is in state STARTED 2026-02-04 01:12:08.174120 | orchestrator | 2026-02-04 01:12:08 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:12:11.204670 | orchestrator | 2026-02-04 01:12:11 | INFO  | Task e2517f23-d1d1-42a5-ab66-a18c5aaf73f1 is in state STARTED 2026-02-04 01:12:11.206169 | orchestrator | 2026-02-04 01:12:11 | INFO  | Task 
e1a0af0c-c5dc-4576-858b-027414f2952c is in state STARTED 2026-02-04 01:12:11.206858 | orchestrator | 2026-02-04 01:12:11 | INFO  | Task a36b03f7-eb98-4e8a-8cb0-ab2bc3675530 is in state STARTED 2026-02-04 01:12:11.208720 | orchestrator | 2026-02-04 01:12:11 | INFO  | Task 6de41111-19cd-4ecd-8571-65308ec3f489 is in state STARTED 2026-02-04 01:12:11.208829 | orchestrator | 2026-02-04 01:12:11 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:12:14.256365 | orchestrator | 2026-02-04 01:12:14 | INFO  | Task e2517f23-d1d1-42a5-ab66-a18c5aaf73f1 is in state STARTED 2026-02-04 01:12:14.257182 | orchestrator | 2026-02-04 01:12:14 | INFO  | Task e1a0af0c-c5dc-4576-858b-027414f2952c is in state STARTED 2026-02-04 01:12:14.258453 | orchestrator | 2026-02-04 01:12:14 | INFO  | Task a36b03f7-eb98-4e8a-8cb0-ab2bc3675530 is in state STARTED 2026-02-04 01:12:14.261196 | orchestrator | 2026-02-04 01:12:14 | INFO  | Task 6de41111-19cd-4ecd-8571-65308ec3f489 is in state STARTED 2026-02-04 01:12:14.261355 | orchestrator | 2026-02-04 01:12:14 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:12:17.298199 | orchestrator | 2026-02-04 01:12:17 | INFO  | Task e2517f23-d1d1-42a5-ab66-a18c5aaf73f1 is in state STARTED 2026-02-04 01:12:17.298961 | orchestrator | 2026-02-04 01:12:17 | INFO  | Task e1a0af0c-c5dc-4576-858b-027414f2952c is in state STARTED 2026-02-04 01:12:17.299578 | orchestrator | 2026-02-04 01:12:17 | INFO  | Task a36b03f7-eb98-4e8a-8cb0-ab2bc3675530 is in state STARTED 2026-02-04 01:12:17.300533 | orchestrator | 2026-02-04 01:12:17 | INFO  | Task 6de41111-19cd-4ecd-8571-65308ec3f489 is in state STARTED 2026-02-04 01:12:17.300555 | orchestrator | 2026-02-04 01:12:17 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:12:20.345533 | orchestrator | 2026-02-04 01:12:20 | INFO  | Task e2517f23-d1d1-42a5-ab66-a18c5aaf73f1 is in state STARTED 2026-02-04 01:12:20.347019 | orchestrator | 2026-02-04 01:12:20 | INFO  | Task 
e1a0af0c-c5dc-4576-858b-027414f2952c is in state STARTED 2026-02-04 01:12:20.347134 | orchestrator | 2026-02-04 01:12:20 | INFO  | Task a36b03f7-eb98-4e8a-8cb0-ab2bc3675530 is in state STARTED 2026-02-04 01:12:20.347281 | orchestrator | 2026-02-04 01:12:20 | INFO  | Task 6de41111-19cd-4ecd-8571-65308ec3f489 is in state STARTED 2026-02-04 01:12:20.347366 | orchestrator | 2026-02-04 01:12:20 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:12:23.379722 | orchestrator | 2026-02-04 01:12:23 | INFO  | Task e2517f23-d1d1-42a5-ab66-a18c5aaf73f1 is in state STARTED 2026-02-04 01:12:23.382145 | orchestrator | 2026-02-04 01:12:23 | INFO  | Task e1a0af0c-c5dc-4576-858b-027414f2952c is in state STARTED 2026-02-04 01:12:23.385357 | orchestrator | 2026-02-04 01:12:23 | INFO  | Task a36b03f7-eb98-4e8a-8cb0-ab2bc3675530 is in state STARTED 2026-02-04 01:12:23.386496 | orchestrator | 2026-02-04 01:12:23 | INFO  | Task 6de41111-19cd-4ecd-8571-65308ec3f489 is in state STARTED 2026-02-04 01:12:23.386518 | orchestrator | 2026-02-04 01:12:23 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:12:26.412521 | orchestrator | 2026-02-04 01:12:26 | INFO  | Task e2517f23-d1d1-42a5-ab66-a18c5aaf73f1 is in state STARTED 2026-02-04 01:12:26.413138 | orchestrator | 2026-02-04 01:12:26 | INFO  | Task e1a0af0c-c5dc-4576-858b-027414f2952c is in state STARTED 2026-02-04 01:12:26.414812 | orchestrator | 2026-02-04 01:12:26 | INFO  | Task a36b03f7-eb98-4e8a-8cb0-ab2bc3675530 is in state STARTED 2026-02-04 01:12:26.415590 | orchestrator | 2026-02-04 01:12:26 | INFO  | Task 6de41111-19cd-4ecd-8571-65308ec3f489 is in state STARTED 2026-02-04 01:12:26.415642 | orchestrator | 2026-02-04 01:12:26 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:12:29.448798 | orchestrator | 2026-02-04 01:12:29 | INFO  | Task e2517f23-d1d1-42a5-ab66-a18c5aaf73f1 is in state STARTED 2026-02-04 01:12:29.449149 | orchestrator | 2026-02-04 01:12:29 | INFO  | Task 
e1a0af0c-c5dc-4576-858b-027414f2952c is in state STARTED 2026-02-04 01:12:29.449879 | orchestrator | 2026-02-04 01:12:29 | INFO  | Task a36b03f7-eb98-4e8a-8cb0-ab2bc3675530 is in state STARTED 2026-02-04 01:12:29.454198 | orchestrator | 2026-02-04 01:12:29 | INFO  | Task 6de41111-19cd-4ecd-8571-65308ec3f489 is in state STARTED 2026-02-04 01:12:29.454252 | orchestrator | 2026-02-04 01:12:29 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:12:32.485362 | orchestrator | 2026-02-04 01:12:32 | INFO  | Task e2517f23-d1d1-42a5-ab66-a18c5aaf73f1 is in state SUCCESS 2026-02-04 01:12:32.485632 | orchestrator | 2026-02-04 01:12:32 | INFO  | Task e1a0af0c-c5dc-4576-858b-027414f2952c is in state STARTED 2026-02-04 01:12:32.487559 | orchestrator | 2026-02-04 01:12:32 | INFO  | Task a36b03f7-eb98-4e8a-8cb0-ab2bc3675530 is in state STARTED 2026-02-04 01:12:32.488209 | orchestrator | 2026-02-04 01:12:32 | INFO  | Task 8295314f-b4ec-414a-98c5-d5e4cbe91fc3 is in state STARTED 2026-02-04 01:12:32.489103 | orchestrator | 2026-02-04 01:12:32 | INFO  | Task 6de41111-19cd-4ecd-8571-65308ec3f489 is in state STARTED 2026-02-04 01:12:32.489125 | orchestrator | 2026-02-04 01:12:32 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:12:35.528860 | orchestrator | 2026-02-04 01:12:35 | INFO  | Task e1a0af0c-c5dc-4576-858b-027414f2952c is in state STARTED 2026-02-04 01:12:35.529568 | orchestrator | 2026-02-04 01:12:35 | INFO  | Task a36b03f7-eb98-4e8a-8cb0-ab2bc3675530 is in state STARTED 2026-02-04 01:12:35.532519 | orchestrator | 2026-02-04 01:12:35 | INFO  | Task 8295314f-b4ec-414a-98c5-d5e4cbe91fc3 is in state STARTED 2026-02-04 01:12:35.533438 | orchestrator | 2026-02-04 01:12:35 | INFO  | Task 6de41111-19cd-4ecd-8571-65308ec3f489 is in state STARTED 2026-02-04 01:12:35.533519 | orchestrator | 2026-02-04 01:12:35 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:12:38.571242 | orchestrator | 2026-02-04 01:12:38 | INFO  | Task 
e1a0af0c-c5dc-4576-858b-027414f2952c is in state STARTED 2026-02-04 01:12:38.571600 | orchestrator | 2026-02-04 01:12:38 | INFO  | Task a36b03f7-eb98-4e8a-8cb0-ab2bc3675530 is in state STARTED 2026-02-04 01:12:38.572795 | orchestrator | 2026-02-04 01:12:38 | INFO  | Task 8295314f-b4ec-414a-98c5-d5e4cbe91fc3 is in state STARTED 2026-02-04 01:12:38.573600 | orchestrator | 2026-02-04 01:12:38 | INFO  | Task 6de41111-19cd-4ecd-8571-65308ec3f489 is in state STARTED 2026-02-04 01:12:38.573642 | orchestrator | 2026-02-04 01:12:38 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:12:41.609809 | orchestrator | 2026-02-04 01:12:41 | INFO  | Task e1a0af0c-c5dc-4576-858b-027414f2952c is in state STARTED 2026-02-04 01:12:41.610338 | orchestrator | 2026-02-04 01:12:41 | INFO  | Task a36b03f7-eb98-4e8a-8cb0-ab2bc3675530 is in state STARTED 2026-02-04 01:12:41.611428 | orchestrator | 2026-02-04 01:12:41 | INFO  | Task 8295314f-b4ec-414a-98c5-d5e4cbe91fc3 is in state STARTED 2026-02-04 01:12:41.613453 | orchestrator | 2026-02-04 01:12:41 | INFO  | Task 6de41111-19cd-4ecd-8571-65308ec3f489 is in state STARTED 2026-02-04 01:12:41.613496 | orchestrator | 2026-02-04 01:12:41 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:12:44.651606 | orchestrator | 2026-02-04 01:12:44 | INFO  | Task e1a0af0c-c5dc-4576-858b-027414f2952c is in state STARTED 2026-02-04 01:12:44.654566 | orchestrator | 2026-02-04 01:12:44 | INFO  | Task a36b03f7-eb98-4e8a-8cb0-ab2bc3675530 is in state STARTED 2026-02-04 01:12:44.656965 | orchestrator | 2026-02-04 01:12:44 | INFO  | Task 8295314f-b4ec-414a-98c5-d5e4cbe91fc3 is in state STARTED 2026-02-04 01:12:44.658991 | orchestrator | 2026-02-04 01:12:44 | INFO  | Task 6de41111-19cd-4ecd-8571-65308ec3f489 is in state STARTED 2026-02-04 01:12:44.659122 | orchestrator | 2026-02-04 01:12:44 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:12:47.697615 | orchestrator | 2026-02-04 01:12:47 | INFO  | Task 
e1a0af0c-c5dc-4576-858b-027414f2952c is in state STARTED 2026-02-04 01:12:47.697672 | orchestrator | 2026-02-04 01:12:47 | INFO  | Task a36b03f7-eb98-4e8a-8cb0-ab2bc3675530 is in state STARTED 2026-02-04 01:12:47.699287 | orchestrator | 2026-02-04 01:12:47 | INFO  | Task 8295314f-b4ec-414a-98c5-d5e4cbe91fc3 is in state STARTED 2026-02-04 01:12:47.702698 | orchestrator | 2026-02-04 01:12:47 | INFO  | Task 6de41111-19cd-4ecd-8571-65308ec3f489 is in state STARTED 2026-02-04 01:12:47.702749 | orchestrator | 2026-02-04 01:12:47 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:12:50.755255 | orchestrator | 2026-02-04 01:12:50 | INFO  | Task e1a0af0c-c5dc-4576-858b-027414f2952c is in state STARTED 2026-02-04 01:12:50.756653 | orchestrator | 2026-02-04 01:12:50 | INFO  | Task a36b03f7-eb98-4e8a-8cb0-ab2bc3675530 is in state STARTED 2026-02-04 01:12:50.757973 | orchestrator | 2026-02-04 01:12:50 | INFO  | Task 8295314f-b4ec-414a-98c5-d5e4cbe91fc3 is in state STARTED 2026-02-04 01:12:50.760346 | orchestrator | 2026-02-04 01:12:50 | INFO  | Task 6de41111-19cd-4ecd-8571-65308ec3f489 is in state STARTED 2026-02-04 01:12:50.760382 | orchestrator | 2026-02-04 01:12:50 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:12:53.802171 | orchestrator | 2026-02-04 01:12:53 | INFO  | Task e1a0af0c-c5dc-4576-858b-027414f2952c is in state STARTED 2026-02-04 01:12:53.803539 | orchestrator | 2026-02-04 01:12:53 | INFO  | Task a36b03f7-eb98-4e8a-8cb0-ab2bc3675530 is in state STARTED 2026-02-04 01:12:53.805708 | orchestrator | 2026-02-04 01:12:53 | INFO  | Task 8295314f-b4ec-414a-98c5-d5e4cbe91fc3 is in state STARTED 2026-02-04 01:12:53.807804 | orchestrator | 2026-02-04 01:12:53 | INFO  | Task 6de41111-19cd-4ecd-8571-65308ec3f489 is in state STARTED 2026-02-04 01:12:53.808561 | orchestrator | 2026-02-04 01:12:53 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:12:56.853988 | orchestrator | 2026-02-04 01:12:56 | INFO  | Task 
e1a0af0c-c5dc-4576-858b-027414f2952c is in state STARTED 2026-02-04 01:12:56.854106 | orchestrator | 2026-02-04 01:12:56 | INFO  | Task a36b03f7-eb98-4e8a-8cb0-ab2bc3675530 is in state STARTED 2026-02-04 01:12:56.854919 | orchestrator | 2026-02-04 01:12:56 | INFO  | Task 8295314f-b4ec-414a-98c5-d5e4cbe91fc3 is in state STARTED 2026-02-04 01:12:56.858007 | orchestrator | 2026-02-04 01:12:56 | INFO  | Task 6de41111-19cd-4ecd-8571-65308ec3f489 is in state STARTED 2026-02-04 01:12:56.858100 | orchestrator | 2026-02-04 01:12:56 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:12:59.893767 | orchestrator | 2026-02-04 01:12:59 | INFO  | Task e1a0af0c-c5dc-4576-858b-027414f2952c is in state STARTED 2026-02-04 01:12:59.895121 | orchestrator | 2026-02-04 01:12:59 | INFO  | Task a36b03f7-eb98-4e8a-8cb0-ab2bc3675530 is in state STARTED 2026-02-04 01:12:59.896739 | orchestrator | 2026-02-04 01:12:59 | INFO  | Task 8295314f-b4ec-414a-98c5-d5e4cbe91fc3 is in state STARTED 2026-02-04 01:12:59.898551 | orchestrator | 2026-02-04 01:12:59 | INFO  | Task 6de41111-19cd-4ecd-8571-65308ec3f489 is in state STARTED 2026-02-04 01:12:59.898601 | orchestrator | 2026-02-04 01:12:59 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:13:02.932740 | orchestrator | 2026-02-04 01:13:02 | INFO  | Task e1a0af0c-c5dc-4576-858b-027414f2952c is in state STARTED 2026-02-04 01:13:02.932793 | orchestrator | 2026-02-04 01:13:02 | INFO  | Task a36b03f7-eb98-4e8a-8cb0-ab2bc3675530 is in state STARTED 2026-02-04 01:13:02.934631 | orchestrator | 2026-02-04 01:13:02 | INFO  | Task 8295314f-b4ec-414a-98c5-d5e4cbe91fc3 is in state STARTED 2026-02-04 01:13:02.935335 | orchestrator | 2026-02-04 01:13:02 | INFO  | Task 6de41111-19cd-4ecd-8571-65308ec3f489 is in state STARTED 2026-02-04 01:13:02.935362 | orchestrator | 2026-02-04 01:13:02 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:13:05.981858 | orchestrator | 2026-02-04 01:13:05 | INFO  | Task 
e1a0af0c-c5dc-4576-858b-027414f2952c is in state STARTED
2026-02-04 01:13:05.985518 | orchestrator | 2026-02-04 01:13:05 | INFO  | Task a8192661-ad96-443d-826a-8af14f9390e0 is in state STARTED
2026-02-04 01:13:05.989293 | orchestrator | 2026-02-04 01:13:05 | INFO  | Task a36b03f7-eb98-4e8a-8cb0-ab2bc3675530 is in state SUCCESS
2026-02-04 01:13:05.991943 | orchestrator |
2026-02-04 01:13:05.991988 | orchestrator |
2026-02-04 01:13:05.991993 | orchestrator | PLAY [Download ironic ipa images] **********************************************
2026-02-04 01:13:05.991998 | orchestrator |
2026-02-04 01:13:05.992002 | orchestrator | TASK [Ensure the destination directory exists] *********************************
2026-02-04 01:13:05.992006 | orchestrator | Wednesday 04 February 2026 01:11:43 +0000 (0:00:00.488) 0:00:00.488 ****
2026-02-04 01:13:05.992060 | orchestrator | changed: [localhost]
2026-02-04 01:13:05.992069 | orchestrator |
2026-02-04 01:13:05.992075 | orchestrator | TASK [Download ironic-agent initramfs] *****************************************
2026-02-04 01:13:05.992081 | orchestrator | Wednesday 04 February 2026 01:11:44 +0000 (0:00:01.451) 0:00:01.940 ****
2026-02-04 01:13:05.992087 | orchestrator | changed: [localhost]
2026-02-04 01:13:05.992094 | orchestrator |
2026-02-04 01:13:05.992100 | orchestrator | TASK [Download ironic-agent kernel] ********************************************
2026-02-04 01:13:05.992107 | orchestrator | Wednesday 04 February 2026 01:12:23 +0000 (0:00:38.494) 0:00:40.434 ****
2026-02-04 01:13:05.992113 | orchestrator | changed: [localhost]
2026-02-04 01:13:05.992120 | orchestrator |
2026-02-04 01:13:05.992127 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-04 01:13:05.992131 | orchestrator |
2026-02-04 01:13:05.992135 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-04 01:13:05.992139 | orchestrator | Wednesday 04 February 2026 01:12:27 +0000 (0:00:04.499) 0:00:44.933 ****
2026-02-04 01:13:05.992143 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:13:05.992146 | orchestrator | ok: [testbed-node-1]
2026-02-04 01:13:05.992151 | orchestrator | ok: [testbed-node-2]
2026-02-04 01:13:05.992157 | orchestrator |
2026-02-04 01:13:05.992162 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-04 01:13:05.992171 | orchestrator | Wednesday 04 February 2026 01:12:28 +0000 (0:00:00.457) 0:00:45.391 ****
2026-02-04 01:13:05.992180 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False)
2026-02-04 01:13:05.992186 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False)
2026-02-04 01:13:05.992193 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False)
2026-02-04 01:13:05.992199 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True
2026-02-04 01:13:05.992206 | orchestrator |
2026-02-04 01:13:05.992212 | orchestrator | PLAY [Apply role ironic] *******************************************************
2026-02-04 01:13:05.992217 | orchestrator | skipping: no hosts matched
2026-02-04 01:13:05.992224 | orchestrator |
2026-02-04 01:13:05.992231 | orchestrator | PLAY RECAP *********************************************************************
2026-02-04 01:13:05.992239 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-04 01:13:05.992246 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-04 01:13:05.992254 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-04 01:13:05.992260 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-04 01:13:05.992267 | orchestrator |
2026-02-04 01:13:05.992274 | orchestrator |
2026-02-04 01:13:05.992280 | orchestrator | TASKS RECAP ********************************************************************
2026-02-04 01:13:05.992284 | orchestrator | Wednesday 04 February 2026 01:12:29 +0000 (0:00:01.465) 0:00:46.856 ****
2026-02-04 01:13:05.992288 | orchestrator | ===============================================================================
2026-02-04 01:13:05.992292 | orchestrator | Download ironic-agent initramfs ---------------------------------------- 38.49s
2026-02-04 01:13:05.992296 | orchestrator | Download ironic-agent kernel -------------------------------------------- 4.50s
2026-02-04 01:13:05.992313 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.47s
2026-02-04 01:13:05.992362 | orchestrator | Ensure the destination directory exists --------------------------------- 1.45s
2026-02-04 01:13:05.992395 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.46s
2026-02-04 01:13:05.992403 | orchestrator |
2026-02-04 01:13:05.992409 | orchestrator |
2026-02-04 01:13:05.992416 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-04 01:13:05.992420 | orchestrator |
2026-02-04 01:13:05.992433 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-04 01:13:05.992439 | orchestrator | Wednesday 04 February 2026 01:09:38 +0000 (0:00:00.281) 0:00:00.281 ****
2026-02-04 01:13:05.992493 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:13:05.992499 | orchestrator | ok: [testbed-node-1]
2026-02-04 01:13:05.992503 | orchestrator | ok: [testbed-node-2]
2026-02-04 01:13:05.992507 | orchestrator |
2026-02-04 01:13:05.992510 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-04 01:13:05.992514 | orchestrator | Wednesday 04 February 2026 01:09:39 +0000 (0:00:00.347) 0:00:00.628 ****
2026-02-04 01:13:05.992518 | orchestrator | ok: [testbed-node-0] =>
(item=enable_designate_True)
2026-02-04 01:13:05.992522 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True)
2026-02-04 01:13:05.992526 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True)
2026-02-04 01:13:05.992529 | orchestrator |
2026-02-04 01:13:05.992533 | orchestrator | PLAY [Apply role designate] ****************************************************
2026-02-04 01:13:05.992537 | orchestrator |
2026-02-04 01:13:05.992541 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-02-04 01:13:05.992545 | orchestrator | Wednesday 04 February 2026 01:09:39 +0000 (0:00:00.473) 0:00:01.101 ****
2026-02-04 01:13:05.992549 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 01:13:05.992553 | orchestrator |
2026-02-04 01:13:05.992557 | orchestrator | TASK [service-ks-register : designate | Creating services] *********************
2026-02-04 01:13:05.992561 | orchestrator | Wednesday 04 February 2026 01:09:40 +0000 (0:00:00.636) 0:00:01.738 ****
2026-02-04 01:13:05.992575 | orchestrator | changed: [testbed-node-0] => (item=designate (dns))
2026-02-04 01:13:05.992580 | orchestrator |
2026-02-04 01:13:05.992583 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ********************
2026-02-04 01:13:05.992589 | orchestrator | Wednesday 04 February 2026 01:09:43 +0000 (0:00:03.088) 0:00:04.826 ****
2026-02-04 01:13:05.992597 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal)
2026-02-04 01:13:05.992607 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public)
2026-02-04 01:13:05.992614 | orchestrator |
2026-02-04 01:13:05.992620 | orchestrator | TASK [service-ks-register : designate | Creating projects] *********************
2026-02-04 01:13:05.992626 | orchestrator | Wednesday 04 February 2026 01:09:50 +0000 (0:00:06.949) 0:00:11.775 ****
2026-02-04 01:13:05.992632 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-02-04 01:13:05.992637 | orchestrator |
2026-02-04 01:13:05.992642 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************
2026-02-04 01:13:05.992648 | orchestrator | Wednesday 04 February 2026 01:09:54 +0000 (0:00:03.655) 0:00:15.431 ****
2026-02-04 01:13:05.992654 | orchestrator | changed: [testbed-node-0] => (item=designate -> service)
2026-02-04 01:13:05.992660 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-02-04 01:13:05.992666 | orchestrator |
2026-02-04 01:13:05.992671 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************
2026-02-04 01:13:05.992677 | orchestrator | Wednesday 04 February 2026 01:09:58 +0000 (0:00:04.214) 0:00:19.646 ****
2026-02-04 01:13:05.992683 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-02-04 01:13:05.992689 | orchestrator |
2026-02-04 01:13:05.992695 | orchestrator | TASK [service-ks-register : designate | Granting user roles] *******************
2026-02-04 01:13:05.992710 | orchestrator | Wednesday 04 February 2026 01:10:02 +0000 (0:00:03.721) 0:00:23.367 ****
2026-02-04 01:13:05.992716 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin)
2026-02-04 01:13:05.992722 | orchestrator |
2026-02-04 01:13:05.992728 | orchestrator | TASK [designate : Ensuring config directories exist] ***************************
2026-02-04 01:13:05.992734 | orchestrator | Wednesday 04 February 2026 01:10:06 +0000 (0:00:04.099) 0:00:27.467 ****
2026-02-04 01:13:05.992756 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes':
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-04 01:13:05.992772 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-04 01:13:05.992786 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-04 01:13:05.992794 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-04 01:13:05.992802 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-04 01:13:05.992814 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 
'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-04 01:13:05.992819 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-04 01:13:05.992826 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-04 01:13:05.992830 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-04 01:13:05.992837 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-04 01:13:05.992842 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-04 01:13:05.992849 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-04 01:13:05.992853 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-04 01:13:05.992857 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-04 01:13:05.992870 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-04 01:13:05.992876 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-04 01:13:05.992906 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-04 01:13:05.992942 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 
5672'], 'timeout': '30'}}})
2026-02-04 01:13:05.992950 | orchestrator |
2026-02-04 01:13:05.992954 | orchestrator | TASK [designate : Check if policies shall be overwritten] **********************
2026-02-04 01:13:05.992959 | orchestrator | Wednesday 04 February 2026 01:10:09 +0000 (0:00:03.471) 0:00:30.938 ****
2026-02-04 01:13:05.992963 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:13:05.992967 | orchestrator |
2026-02-04 01:13:05.992971 | orchestrator | TASK [designate : Set designate policy file] ***********************************
2026-02-04 01:13:05.992975 | orchestrator | Wednesday 04 February 2026 01:10:09 +0000 (0:00:00.222) 0:00:31.161 ****
2026-02-04 01:13:05.992979 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:13:05.992994 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:13:05.992998 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:13:05.993006 | orchestrator |
2026-02-04 01:13:05.993026 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-02-04 01:13:05.993030 | orchestrator | Wednesday 04 February 2026 01:10:10 +0000 (0:00:00.823) 0:00:31.984 ****
2026-02-04 01:13:05.993034 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 01:13:05.993038 | orchestrator |
2026-02-04 01:13:05.993042 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ******
2026-02-04 01:13:05.993046 | orchestrator | Wednesday 04 February 2026 01:10:12 +0000 (0:00:02.120) 0:00:34.105 ****
2026-02-04 01:13:05.993050 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-04 01:13:05.993057 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-04 01:13:05.993065 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-04 01:13:05.993073 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-04 01:13:05.993090 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-04 01:13:05.993097 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 
'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-04 01:13:05.993104 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-04 01:13:05.993113 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-04 01:13:05.993119 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-04 01:13:05.993129 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-04 01:13:05.993133 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-04 01:13:05.993144 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-04 01:13:05.993148 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-04 01:13:05.993152 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-04 01:13:05.993159 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 
5672'], 'timeout': '30'}}})
2026-02-04 01:13:05.993163 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-04 01:13:05.993172 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-04 01:13:05.993176 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-04 01:13:05.993180 | orchestrator |
2026-02-04 01:13:05.993184 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] ***
2026-02-04 01:13:05.993188 | orchestrator | Wednesday 04 February 2026 01:10:20 +0000 (0:00:07.978) 0:00:42.083 ****
2026-02-04 01:13:05.993192 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-04 01:13:05.993196 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-04 01:13:05.993202 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image':
'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-04 01:13:05.993210 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-04 01:13:05.993533 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-04 01:13:05.993552 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-04 01:13:05.993558 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-04 01:13:05.993565 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:13:05.993571 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': 
'30'}}})  2026-02-04 01:13:05.993582 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-04 01:13:05.993594 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-04 01:13:05.993605 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-04 01:13:05.993612 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 
'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-04 01:13:05.993618 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:13:05.993624 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-04 01:13:05.993630 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-04 01:13:05.993639 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-04 01:13:05.993649 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-04 01:13:05.993659 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-04 01:13:05.993665 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-04 01:13:05.993672 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:13:05.993678 | orchestrator | 2026-02-04 01:13:05.993685 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2026-02-04 01:13:05.993691 | orchestrator | Wednesday 04 February 2026 01:10:23 +0000 (0:00:02.882) 0:00:44.966 **** 2026-02-04 01:13:05.993695 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-04 01:13:05.993699 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-04 01:13:05.993705 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-04 01:13:05.993714 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-04 01:13:05.993720 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-04 01:13:05.993725 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-04 01:13:05.993729 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:13:05.993733 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 
'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-04 01:13:05.993737 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-04 01:13:05.993741 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-04 01:13:05.993749 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-04 01:13:05.993755 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-04 01:13:05.993760 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-04 01:13:05.993764 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:13:05.993768 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-04 01:13:05.993772 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-04 01:13:05.993800 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-04 01:13:05.993810 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-04 01:13:05.993814 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-04 01:13:05.993821 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-04 01:13:05.993825 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:13:05.993829 | orchestrator | 2026-02-04 01:13:05.993833 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2026-02-04 01:13:05.993837 | orchestrator | Wednesday 04 February 2026 01:10:25 +0000 (0:00:02.167) 0:00:47.133 **** 2026-02-04 01:13:05.993841 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-04 01:13:05.993845 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-04 01:13:05.993853 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-04 01:13:05.993864 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-04 01:13:05.993870 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': 
'30'}}}) 2026-02-04 01:13:05.993875 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-04 01:13:05.993879 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-04 01:13:05.993883 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-04 01:13:05.993889 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-04 01:13:05.993895 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-04 01:13:05.993899 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-04 01:13:05.993906 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 
'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-04 01:13:05.993910 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-04 01:13:05.993914 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-04 01:13:05.993921 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-04 01:13:05.993925 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-04 01:13:05.993931 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-04 01:13:05.993935 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-04 01:13:05.993939 | orchestrator | 2026-02-04 01:13:05.993945 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2026-02-04 01:13:05.993949 | orchestrator | Wednesday 04 February 2026 01:10:33 +0000 (0:00:07.692) 0:00:54.825 **** 2026-02-04 01:13:05.993961 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-04 01:13:05.993970 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-04 01:13:05.993985 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-04 01:13:05.993994 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-04 01:13:05.994000 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-04 01:13:05.994058 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-04 01:13:05.994069 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-04 01:13:05.994077 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-04 01:13:05.994085 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-04 01:13:05.994091 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-04 01:13:05.994096 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-04 01:13:05.994102 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-04 01:13:05.994107 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-04 01:13:05.994111 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-04 01:13:05.994121 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-04 01:13:05.994125 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-04 01:13:05.994132 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-04 01:13:05.994137 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-04 01:13:05.994142 | orchestrator | 2026-02-04 01:13:05.994146 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2026-02-04 01:13:05.994151 | orchestrator | Wednesday 04 February 2026 01:10:59 +0000 (0:00:26.235) 0:01:21.061 **** 2026-02-04 01:13:05.994155 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-02-04 01:13:05.994160 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-02-04 01:13:05.994165 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-02-04 01:13:05.994169 | orchestrator | 2026-02-04 01:13:05.994176 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2026-02-04 01:13:05.994181 | orchestrator | Wednesday 04 February 2026 01:11:06 +0000 (0:00:06.428) 0:01:27.490 **** 2026-02-04 01:13:05.994185 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-02-04 01:13:05.994190 | orchestrator | changed: [testbed-node-1] => 
(item=/ansible/roles/designate/templates/named.conf.j2) 2026-02-04 01:13:05.994194 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-02-04 01:13:05.994199 | orchestrator | 2026-02-04 01:13:05.994203 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2026-02-04 01:13:05.994208 | orchestrator | Wednesday 04 February 2026 01:11:09 +0000 (0:00:03.469) 0:01:30.959 **** 2026-02-04 01:13:05.994215 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-04 01:13:05.994221 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': 
{'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-04 01:13:05.994227 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-04 01:13:05.994232 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-04 01:13:05.994241 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 
'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-04 01:13:05.994246 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-04 01:13:05.994253 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-04 01:13:05.994258 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-04 01:13:05.994262 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-04 01:13:05.994269 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-04 01:13:05.994274 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-04 01:13:05.994282 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-04 01:13:05.994293 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-04 01:13:05.994303 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-04 01:13:05.994310 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-04 01:13:05.994317 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-04 01:13:05.994327 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-04 01:13:05.994334 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-04 01:13:05.994341 | orchestrator | 2026-02-04 01:13:05.994351 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2026-02-04 01:13:05.994368 | orchestrator | Wednesday 04 February 2026 01:11:13 +0000 (0:00:03.842) 0:01:34.801 **** 2026-02-04 01:13:05.994376 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-04 
01:13:05.994383 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-04 01:13:05.994389 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-04 01:13:05.994399 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 
'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-04 01:13:05.994405 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-04 01:13:05.994419 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-04 01:13:05.994426 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 
'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-04 01:13:05.994432 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-04 01:13:05.994438 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-04 01:13:05.994444 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-04 01:13:05.994453 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-04 01:13:05.994463 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-04 01:13:05.994474 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-04 01:13:05.994481 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-04 01:13:05.994488 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-04 01:13:05.994495 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-04 01:13:05.994505 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-04 01:13:05.994513 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-04 01:13:05.994520 | orchestrator | 2026-02-04 01:13:05.994524 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-02-04 01:13:05.994528 | orchestrator | Wednesday 04 February 2026 01:11:17 +0000 (0:00:03.941) 0:01:38.743 **** 2026-02-04 01:13:05.994532 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:13:05.994536 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:13:05.994540 | orchestrator | skipping: [testbed-node-2] 
2026-02-04 01:13:05.994544 | orchestrator | 2026-02-04 01:13:05.994548 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2026-02-04 01:13:05.994552 | orchestrator | Wednesday 04 February 2026 01:11:18 +0000 (0:00:01.365) 0:01:40.108 **** 2026-02-04 01:13:05.994559 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-04 01:13:05.994564 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-04 01:13:05.994568 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': 
{'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-04 01:13:05.994572 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-04 01:13:05.994580 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-04 01:13:05.994586 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-04 01:13:05.994590 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:13:05.994598 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-04 01:13:05.994602 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-04 01:13:05.994606 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-04 01:13:05.994610 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-04 01:13:05.994616 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-04 01:13:05.994623 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-04 01:13:05.994627 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:13:05.994634 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-04 01:13:05.994638 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-04 01:13:05.994642 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-04 01:13:05.994646 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-04 01:13:05.994650 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-04 01:13:05.994658 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-04 01:13:05.994662 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:13:05.994666 | orchestrator | 2026-02-04 01:13:05.994670 | orchestrator | TASK [designate : Check designate containers] ********************************** 2026-02-04 01:13:05.994674 | orchestrator | Wednesday 04 February 2026 01:11:20 +0000 (0:00:01.948) 0:01:42.057 **** 2026-02-04 01:13:05.994681 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': 
'9001'}}}}) 2026-02-04 01:13:05.994686 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-04 01:13:05.994690 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-04 01:13:05.994694 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 
'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-04 01:13:05.994703 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-04 01:13:05.994710 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-04 01:13:05.994714 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': 
{'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-04 01:13:05.994718 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-04 01:13:05.994722 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-04 01:13:05.994726 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-04 01:13:05.994733 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-04 01:13:05.994739 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-04 01:13:05.994745 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-04 01:13:05.994749 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-04 01:13:05.994753 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-04 01:13:05.994757 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-04 01:13:05.994761 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-04 01:13:05.994770 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-04 01:13:05.994774 | orchestrator | 2026-02-04 01:13:05.994777 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-02-04 01:13:05.994781 | orchestrator | Wednesday 04 February 2026 01:11:25 +0000 (0:00:04.605) 0:01:46.662 **** 2026-02-04 01:13:05.994785 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:13:05.994789 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:13:05.994793 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:13:05.994797 | orchestrator | 2026-02-04 01:13:05.994801 | 
orchestrator | TASK [designate : Creating Designate databases] ******************************** 2026-02-04 01:13:05.994804 | orchestrator | Wednesday 04 February 2026 01:11:26 +0000 (0:00:00.855) 0:01:47.518 **** 2026-02-04 01:13:05.994808 | orchestrator | changed: [testbed-node-0] => (item=designate) 2026-02-04 01:13:05.994812 | orchestrator | 2026-02-04 01:13:05.994816 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] *** 2026-02-04 01:13:05.994820 | orchestrator | Wednesday 04 February 2026 01:11:28 +0000 (0:00:02.305) 0:01:49.824 **** 2026-02-04 01:13:05.994824 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-02-04 01:13:05.994828 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}] 2026-02-04 01:13:05.994832 | orchestrator | 2026-02-04 01:13:05.994836 | orchestrator | TASK [designate : Running Designate bootstrap container] *********************** 2026-02-04 01:13:05.994840 | orchestrator | Wednesday 04 February 2026 01:11:30 +0000 (0:00:02.395) 0:01:52.219 **** 2026-02-04 01:13:05.994844 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:13:05.994847 | orchestrator | 2026-02-04 01:13:05.994851 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-02-04 01:13:05.994858 | orchestrator | Wednesday 04 February 2026 01:11:47 +0000 (0:00:16.918) 0:02:09.138 **** 2026-02-04 01:13:05.994862 | orchestrator | 2026-02-04 01:13:05.994866 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-02-04 01:13:05.994869 | orchestrator | Wednesday 04 February 2026 01:11:48 +0000 (0:00:00.364) 0:02:09.503 **** 2026-02-04 01:13:05.994873 | orchestrator | 2026-02-04 01:13:05.994879 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-02-04 01:13:05.994885 | orchestrator | Wednesday 04 February 2026 01:11:48 +0000 (0:00:00.246) 0:02:09.750 **** 
2026-02-04 01:13:05.994891 | orchestrator | 2026-02-04 01:13:05.994895 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ******** 2026-02-04 01:13:05.994899 | orchestrator | Wednesday 04 February 2026 01:11:48 +0000 (0:00:00.159) 0:02:09.909 **** 2026-02-04 01:13:05.994902 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:13:05.994906 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:13:05.994910 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:13:05.994914 | orchestrator | 2026-02-04 01:13:05.994918 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2026-02-04 01:13:05.994921 | orchestrator | Wednesday 04 February 2026 01:12:02 +0000 (0:00:14.300) 0:02:24.210 **** 2026-02-04 01:13:05.994925 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:13:05.994929 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:13:05.994936 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:13:05.994939 | orchestrator | 2026-02-04 01:13:05.994943 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] ************** 2026-02-04 01:13:05.994948 | orchestrator | Wednesday 04 February 2026 01:12:15 +0000 (0:00:13.031) 0:02:37.241 **** 2026-02-04 01:13:05.994955 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:13:05.994965 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:13:05.994970 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:13:05.994976 | orchestrator | 2026-02-04 01:13:05.994982 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] ************* 2026-02-04 01:13:05.994987 | orchestrator | Wednesday 04 February 2026 01:12:26 +0000 (0:00:10.279) 0:02:47.520 **** 2026-02-04 01:13:05.994993 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:13:05.994999 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:13:05.995005 | orchestrator | changed: [testbed-node-0] 2026-02-04 
01:13:05.995040 | orchestrator | 2026-02-04 01:13:05.995046 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2026-02-04 01:13:05.995052 | orchestrator | Wednesday 04 February 2026 01:12:37 +0000 (0:00:11.453) 0:02:58.973 **** 2026-02-04 01:13:05.995059 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:13:05.995064 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:13:05.995070 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:13:05.995075 | orchestrator | 2026-02-04 01:13:05.995081 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2026-02-04 01:13:05.995086 | orchestrator | Wednesday 04 February 2026 01:12:46 +0000 (0:00:09.025) 0:03:07.999 **** 2026-02-04 01:13:05.995092 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:13:05.995099 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:13:05.995105 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:13:05.995110 | orchestrator | 2026-02-04 01:13:05.995117 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2026-02-04 01:13:05.995123 | orchestrator | Wednesday 04 February 2026 01:12:55 +0000 (0:00:08.878) 0:03:16.878 **** 2026-02-04 01:13:05.995128 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:13:05.995134 | orchestrator | 2026-02-04 01:13:05.995140 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 01:13:05.995147 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-04 01:13:05.995155 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-04 01:13:05.995161 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-04 01:13:05.995168 | orchestrator | 2026-02-04 01:13:05.995173 | orchestrator | 
2026-02-04 01:13:05.995177 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-04 01:13:05.995181 | orchestrator | Wednesday 04 February 2026 01:13:02 +0000 (0:00:07.145) 0:03:24.024 **** 2026-02-04 01:13:05.995188 | orchestrator | =============================================================================== 2026-02-04 01:13:05.995192 | orchestrator | designate : Copying over designate.conf -------------------------------- 26.24s 2026-02-04 01:13:05.995196 | orchestrator | designate : Running Designate bootstrap container ---------------------- 16.92s 2026-02-04 01:13:05.995200 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 14.30s 2026-02-04 01:13:05.995204 | orchestrator | designate : Restart designate-api container ---------------------------- 13.03s 2026-02-04 01:13:05.995208 | orchestrator | designate : Restart designate-producer container ----------------------- 11.45s 2026-02-04 01:13:05.995212 | orchestrator | designate : Restart designate-central container ------------------------ 10.28s 2026-02-04 01:13:05.995215 | orchestrator | designate : Restart designate-mdns container ---------------------------- 9.03s 2026-02-04 01:13:05.995219 | orchestrator | designate : Restart designate-worker container -------------------------- 8.88s 2026-02-04 01:13:05.995227 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 7.98s 2026-02-04 01:13:05.995231 | orchestrator | designate : Copying over config.json files for services ----------------- 7.69s 2026-02-04 01:13:05.995235 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 7.15s 2026-02-04 01:13:05.995239 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 6.95s 2026-02-04 01:13:05.995243 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 6.43s 2026-02-04 
01:13:05.995252 | orchestrator | designate : Check designate containers ---------------------------------- 4.61s 2026-02-04 01:13:05.995256 | orchestrator | service-ks-register : designate | Creating users ------------------------ 4.21s 2026-02-04 01:13:05.995260 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 4.10s 2026-02-04 01:13:05.995267 | orchestrator | designate : Copying over rndc.key --------------------------------------- 3.94s 2026-02-04 01:13:05.995277 | orchestrator | designate : Copying over rndc.conf -------------------------------------- 3.84s 2026-02-04 01:13:05.995284 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 3.72s 2026-02-04 01:13:05.995290 | orchestrator | service-ks-register : designate | Creating projects --------------------- 3.66s 2026-02-04 01:13:05.995297 | orchestrator | 2026-02-04 01:13:05 | INFO  | Task 8295314f-b4ec-414a-98c5-d5e4cbe91fc3 is in state STARTED 2026-02-04 01:13:05.995303 | orchestrator | 2026-02-04 01:13:05 | INFO  | Task 6de41111-19cd-4ecd-8571-65308ec3f489 is in state STARTED 2026-02-04 01:13:05.995308 | orchestrator | 2026-02-04 01:13:05 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:13:09.047849 | orchestrator | 2026-02-04 01:13:09 | INFO  | Task e1a0af0c-c5dc-4576-858b-027414f2952c is in state STARTED 2026-02-04 01:13:09.051446 | orchestrator | 2026-02-04 01:13:09 | INFO  | Task a8192661-ad96-443d-826a-8af14f9390e0 is in state STARTED 2026-02-04 01:13:09.051570 | orchestrator | 2026-02-04 01:13:09 | INFO  | Task 8295314f-b4ec-414a-98c5-d5e4cbe91fc3 is in state STARTED 2026-02-04 01:13:09.053329 | orchestrator | 2026-02-04 01:13:09 | INFO  | Task 6de41111-19cd-4ecd-8571-65308ec3f489 is in state STARTED 2026-02-04 01:13:09.053377 | orchestrator | 2026-02-04 01:13:09 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:13:12.082532 | orchestrator | 2026-02-04 01:13:12 | INFO  | Task 
e1a0af0c-c5dc-4576-858b-027414f2952c is in state STARTED 2026-02-04 01:13:12.083389 | orchestrator | 2026-02-04 01:13:12 | INFO  | Task a8192661-ad96-443d-826a-8af14f9390e0 is in state STARTED 2026-02-04 01:13:12.084032 | orchestrator | 2026-02-04 01:13:12 | INFO  | Task 8295314f-b4ec-414a-98c5-d5e4cbe91fc3 is in state STARTED 2026-02-04 01:13:12.085083 | orchestrator | 2026-02-04 01:13:12 | INFO  | Task 6de41111-19cd-4ecd-8571-65308ec3f489 is in state STARTED 2026-02-04 01:13:12.085118 | orchestrator | 2026-02-04 01:13:12 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:13:15.113489 | orchestrator | 2026-02-04 01:13:15 | INFO  | Task e1a0af0c-c5dc-4576-858b-027414f2952c is in state STARTED 2026-02-04 01:13:15.114889 | orchestrator | 2026-02-04 01:13:15 | INFO  | Task a8192661-ad96-443d-826a-8af14f9390e0 is in state STARTED 2026-02-04 01:13:15.116469 | orchestrator | 2026-02-04 01:13:15 | INFO  | Task 8295314f-b4ec-414a-98c5-d5e4cbe91fc3 is in state STARTED 2026-02-04 01:13:15.117209 | orchestrator | 2026-02-04 01:13:15 | INFO  | Task 6de41111-19cd-4ecd-8571-65308ec3f489 is in state STARTED 2026-02-04 01:13:15.117249 | orchestrator | 2026-02-04 01:13:15 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:13:18.155715 | orchestrator | 2026-02-04 01:13:18 | INFO  | Task e1a0af0c-c5dc-4576-858b-027414f2952c is in state STARTED 2026-02-04 01:13:18.156352 | orchestrator | 2026-02-04 01:13:18 | INFO  | Task a8192661-ad96-443d-826a-8af14f9390e0 is in state STARTED 2026-02-04 01:13:18.157343 | orchestrator | 2026-02-04 01:13:18 | INFO  | Task 8295314f-b4ec-414a-98c5-d5e4cbe91fc3 is in state STARTED 2026-02-04 01:13:18.158300 | orchestrator | 2026-02-04 01:13:18 | INFO  | Task 6de41111-19cd-4ecd-8571-65308ec3f489 is in state STARTED 2026-02-04 01:13:18.158331 | orchestrator | 2026-02-04 01:13:18 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:13:21.193417 | orchestrator | 2026-02-04 01:13:21 | INFO  | Task 
e1a0af0c-c5dc-4576-858b-027414f2952c is in state STARTED 2026-02-04 01:13:21.194369 | orchestrator | 2026-02-04 01:13:21 | INFO  | Task a8192661-ad96-443d-826a-8af14f9390e0 is in state STARTED 2026-02-04 01:13:21.195369 | orchestrator | 2026-02-04 01:13:21 | INFO  | Task 8295314f-b4ec-414a-98c5-d5e4cbe91fc3 is in state STARTED 2026-02-04 01:13:21.196344 | orchestrator | 2026-02-04 01:13:21 | INFO  | Task 6de41111-19cd-4ecd-8571-65308ec3f489 is in state STARTED 2026-02-04 01:13:21.196386 | orchestrator | 2026-02-04 01:13:21 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:13:24.233111 | orchestrator | 2026-02-04 01:13:24 | INFO  | Task e1a0af0c-c5dc-4576-858b-027414f2952c is in state STARTED 2026-02-04 01:13:24.237972 | orchestrator | 2026-02-04 01:13:24 | INFO  | Task a8192661-ad96-443d-826a-8af14f9390e0 is in state STARTED 2026-02-04 01:13:24.243088 | orchestrator | 2026-02-04 01:13:24 | INFO  | Task 8295314f-b4ec-414a-98c5-d5e4cbe91fc3 is in state STARTED 2026-02-04 01:13:24.246409 | orchestrator | 2026-02-04 01:13:24 | INFO  | Task 6de41111-19cd-4ecd-8571-65308ec3f489 is in state STARTED 2026-02-04 01:13:24.246673 | orchestrator | 2026-02-04 01:13:24 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:13:27.276603 | orchestrator | 2026-02-04 01:13:27 | INFO  | Task e1a0af0c-c5dc-4576-858b-027414f2952c is in state STARTED 2026-02-04 01:13:27.277756 | orchestrator | 2026-02-04 01:13:27 | INFO  | Task a8192661-ad96-443d-826a-8af14f9390e0 is in state STARTED 2026-02-04 01:13:27.278668 | orchestrator | 2026-02-04 01:13:27 | INFO  | Task 8295314f-b4ec-414a-98c5-d5e4cbe91fc3 is in state STARTED 2026-02-04 01:13:27.279399 | orchestrator | 2026-02-04 01:13:27 | INFO  | Task 6de41111-19cd-4ecd-8571-65308ec3f489 is in state STARTED 2026-02-04 01:13:27.279475 | orchestrator | 2026-02-04 01:13:27 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:13:30.324627 | orchestrator | 2026-02-04 01:13:30 | INFO  | Task 
e1a0af0c-c5dc-4576-858b-027414f2952c is in state STARTED 2026-02-04 01:13:30.326598 | orchestrator | 2026-02-04 01:13:30 | INFO  | Task a8192661-ad96-443d-826a-8af14f9390e0 is in state STARTED 2026-02-04 01:13:30.329106 | orchestrator | 2026-02-04 01:13:30 | INFO  | Task 8295314f-b4ec-414a-98c5-d5e4cbe91fc3 is in state STARTED 2026-02-04 01:13:30.332206 | orchestrator | 2026-02-04 01:13:30 | INFO  | Task 6de41111-19cd-4ecd-8571-65308ec3f489 is in state STARTED 2026-02-04 01:13:30.332250 | orchestrator | 2026-02-04 01:13:30 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:13:33.369605 | orchestrator | 2026-02-04 01:13:33 | INFO  | Task e1a0af0c-c5dc-4576-858b-027414f2952c is in state STARTED 2026-02-04 01:13:33.370419 | orchestrator | 2026-02-04 01:13:33 | INFO  | Task a8192661-ad96-443d-826a-8af14f9390e0 is in state STARTED 2026-02-04 01:13:33.372589 | orchestrator | 2026-02-04 01:13:33 | INFO  | Task 8295314f-b4ec-414a-98c5-d5e4cbe91fc3 is in state STARTED 2026-02-04 01:13:33.374651 | orchestrator | 2026-02-04 01:13:33 | INFO  | Task 6de41111-19cd-4ecd-8571-65308ec3f489 is in state STARTED 2026-02-04 01:13:33.374729 | orchestrator | 2026-02-04 01:13:33 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:13:36.434873 | orchestrator | 2026-02-04 01:13:36 | INFO  | Task e1a0af0c-c5dc-4576-858b-027414f2952c is in state STARTED 2026-02-04 01:13:36.436423 | orchestrator | 2026-02-04 01:13:36 | INFO  | Task a8192661-ad96-443d-826a-8af14f9390e0 is in state STARTED 2026-02-04 01:13:36.437847 | orchestrator | 2026-02-04 01:13:36 | INFO  | Task 8295314f-b4ec-414a-98c5-d5e4cbe91fc3 is in state STARTED 2026-02-04 01:13:36.439149 | orchestrator | 2026-02-04 01:13:36 | INFO  | Task 6de41111-19cd-4ecd-8571-65308ec3f489 is in state STARTED 2026-02-04 01:13:36.439174 | orchestrator | 2026-02-04 01:13:36 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:13:39.487204 | orchestrator | 2026-02-04 01:13:39 | INFO  | Task 
e1a0af0c-c5dc-4576-858b-027414f2952c is in state STARTED 2026-02-04 01:13:39.488341 | orchestrator | 2026-02-04 01:13:39 | INFO  | Task a8192661-ad96-443d-826a-8af14f9390e0 is in state STARTED 2026-02-04 01:13:39.489425 | orchestrator | 2026-02-04 01:13:39 | INFO  | Task 8295314f-b4ec-414a-98c5-d5e4cbe91fc3 is in state STARTED 2026-02-04 01:13:39.491782 | orchestrator | 2026-02-04 01:13:39 | INFO  | Task 6de41111-19cd-4ecd-8571-65308ec3f489 is in state STARTED 2026-02-04 01:13:39.491822 | orchestrator | 2026-02-04 01:13:39 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:13:42.537638 | orchestrator | 2026-02-04 01:13:42 | INFO  | Task e1a0af0c-c5dc-4576-858b-027414f2952c is in state STARTED 2026-02-04 01:13:42.540293 | orchestrator | 2026-02-04 01:13:42 | INFO  | Task a8192661-ad96-443d-826a-8af14f9390e0 is in state STARTED 2026-02-04 01:13:42.543203 | orchestrator | 2026-02-04 01:13:42 | INFO  | Task 8295314f-b4ec-414a-98c5-d5e4cbe91fc3 is in state STARTED 2026-02-04 01:13:42.545403 | orchestrator | 2026-02-04 01:13:42 | INFO  | Task 6de41111-19cd-4ecd-8571-65308ec3f489 is in state STARTED 2026-02-04 01:13:42.545488 | orchestrator | 2026-02-04 01:13:42 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:13:45.604360 | orchestrator | 2026-02-04 01:13:45 | INFO  | Task e1a0af0c-c5dc-4576-858b-027414f2952c is in state STARTED 2026-02-04 01:13:45.606281 | orchestrator | 2026-02-04 01:13:45 | INFO  | Task a8192661-ad96-443d-826a-8af14f9390e0 is in state STARTED 2026-02-04 01:13:45.607615 | orchestrator | 2026-02-04 01:13:45 | INFO  | Task 8295314f-b4ec-414a-98c5-d5e4cbe91fc3 is in state STARTED 2026-02-04 01:13:45.610576 | orchestrator | 2026-02-04 01:13:45 | INFO  | Task 6de41111-19cd-4ecd-8571-65308ec3f489 is in state STARTED 2026-02-04 01:13:45.611407 | orchestrator | 2026-02-04 01:13:45 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:13:48.653686 | orchestrator | 2026-02-04 01:13:48 | INFO  | Task 
e1a0af0c-c5dc-4576-858b-027414f2952c is in state STARTED 2026-02-04 01:13:48.654373 | orchestrator | 2026-02-04 01:13:48 | INFO  | Task a8192661-ad96-443d-826a-8af14f9390e0 is in state STARTED 2026-02-04 01:13:48.655599 | orchestrator | 2026-02-04 01:13:48 | INFO  | Task 8295314f-b4ec-414a-98c5-d5e4cbe91fc3 is in state STARTED 2026-02-04 01:13:48.656422 | orchestrator | 2026-02-04 01:13:48 | INFO  | Task 6de41111-19cd-4ecd-8571-65308ec3f489 is in state STARTED 2026-02-04 01:13:48.656458 | orchestrator | 2026-02-04 01:13:48 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:13:51.732254 | orchestrator | 2026-02-04 01:13:51 | INFO  | Task e1a0af0c-c5dc-4576-858b-027414f2952c is in state STARTED 2026-02-04 01:13:51.733814 | orchestrator | 2026-02-04 01:13:51 | INFO  | Task a8192661-ad96-443d-826a-8af14f9390e0 is in state STARTED 2026-02-04 01:13:51.735308 | orchestrator | 2026-02-04 01:13:51 | INFO  | Task 8295314f-b4ec-414a-98c5-d5e4cbe91fc3 is in state STARTED 2026-02-04 01:13:51.737722 | orchestrator | 2026-02-04 01:13:51 | INFO  | Task 6de41111-19cd-4ecd-8571-65308ec3f489 is in state STARTED 2026-02-04 01:13:51.737762 | orchestrator | 2026-02-04 01:13:51 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:13:54.781279 | orchestrator | 2026-02-04 01:13:54 | INFO  | Task e1a0af0c-c5dc-4576-858b-027414f2952c is in state STARTED 2026-02-04 01:13:54.782486 | orchestrator | 2026-02-04 01:13:54 | INFO  | Task a8192661-ad96-443d-826a-8af14f9390e0 is in state STARTED 2026-02-04 01:13:54.784843 | orchestrator | 2026-02-04 01:13:54 | INFO  | Task 8295314f-b4ec-414a-98c5-d5e4cbe91fc3 is in state SUCCESS 2026-02-04 01:13:54.786625 | orchestrator | 2026-02-04 01:13:54.786671 | orchestrator | 2026-02-04 01:13:54.786681 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-04 01:13:54.786690 | orchestrator | 2026-02-04 01:13:54.786698 | orchestrator | TASK [Group hosts based on Kolla action] 
*************************************** 2026-02-04 01:13:54.786706 | orchestrator | Wednesday 04 February 2026 01:12:37 +0000 (0:00:00.442) 0:00:00.442 **** 2026-02-04 01:13:54.786714 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:13:54.786723 | orchestrator | ok: [testbed-node-1] 2026-02-04 01:13:54.786730 | orchestrator | ok: [testbed-node-2] 2026-02-04 01:13:54.786738 | orchestrator | 2026-02-04 01:13:54.786745 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-04 01:13:54.786753 | orchestrator | Wednesday 04 February 2026 01:12:38 +0000 (0:00:01.146) 0:00:01.589 **** 2026-02-04 01:13:54.786761 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2026-02-04 01:13:54.786768 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2026-02-04 01:13:54.786776 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2026-02-04 01:13:54.786783 | orchestrator | 2026-02-04 01:13:54.786791 | orchestrator | PLAY [Apply role placement] **************************************************** 2026-02-04 01:13:54.786799 | orchestrator | 2026-02-04 01:13:54.786807 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-02-04 01:13:54.786815 | orchestrator | Wednesday 04 February 2026 01:12:40 +0000 (0:00:01.074) 0:00:02.663 **** 2026-02-04 01:13:54.786823 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 01:13:54.786831 | orchestrator | 2026-02-04 01:13:54.786838 | orchestrator | TASK [service-ks-register : placement | Creating services] ********************* 2026-02-04 01:13:54.786845 | orchestrator | Wednesday 04 February 2026 01:12:41 +0000 (0:00:01.410) 0:00:04.074 **** 2026-02-04 01:13:54.786863 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2026-02-04 01:13:54.786871 | orchestrator | 2026-02-04 01:13:54.786878 | 
orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2026-02-04 01:13:54.786886 | orchestrator | Wednesday 04 February 2026 01:12:46 +0000 (0:00:04.798) 0:00:08.872 **** 2026-02-04 01:13:54.786893 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2026-02-04 01:13:54.786900 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2026-02-04 01:13:54.786907 | orchestrator | 2026-02-04 01:13:54.786915 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2026-02-04 01:13:54.786922 | orchestrator | Wednesday 04 February 2026 01:12:52 +0000 (0:00:06.701) 0:00:15.574 **** 2026-02-04 01:13:54.786929 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-04 01:13:54.786937 | orchestrator | 2026-02-04 01:13:54.786944 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2026-02-04 01:13:54.786952 | orchestrator | Wednesday 04 February 2026 01:12:56 +0000 (0:00:03.575) 0:00:19.150 **** 2026-02-04 01:13:54.786959 | orchestrator | changed: [testbed-node-0] => (item=placement -> service) 2026-02-04 01:13:54.786967 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-04 01:13:54.786974 | orchestrator | 2026-02-04 01:13:54.787037 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2026-02-04 01:13:54.787044 | orchestrator | Wednesday 04 February 2026 01:13:00 +0000 (0:00:03.770) 0:00:22.920 **** 2026-02-04 01:13:54.787048 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-04 01:13:54.787053 | orchestrator | 2026-02-04 01:13:54.787057 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2026-02-04 01:13:54.787062 | orchestrator | Wednesday 04 February 2026 01:13:03 +0000 (0:00:03.671) 0:00:26.591 
**** 2026-02-04 01:13:54.787066 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2026-02-04 01:13:54.787070 | orchestrator | 2026-02-04 01:13:54.787075 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-02-04 01:13:54.787079 | orchestrator | Wednesday 04 February 2026 01:13:07 +0000 (0:00:03.606) 0:00:30.198 **** 2026-02-04 01:13:54.787084 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:13:54.787088 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:13:54.787093 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:13:54.787097 | orchestrator | 2026-02-04 01:13:54.787101 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2026-02-04 01:13:54.787106 | orchestrator | Wednesday 04 February 2026 01:13:07 +0000 (0:00:00.324) 0:00:30.522 **** 2026-02-04 01:13:54.787112 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-04 01:13:54.787128 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 
'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-04 01:13:54.787141 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-04 01:13:54.787159 | orchestrator | 2026-02-04 01:13:54.787168 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2026-02-04 01:13:54.787174 | orchestrator | Wednesday 04 February 2026 01:13:09 +0000 (0:00:01.243) 0:00:31.765 **** 
2026-02-04 01:13:54.787181 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:13:54.787188 | orchestrator | 2026-02-04 01:13:54.787195 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2026-02-04 01:13:54.787202 | orchestrator | Wednesday 04 February 2026 01:13:09 +0000 (0:00:00.319) 0:00:32.084 **** 2026-02-04 01:13:54.787209 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:13:54.787215 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:13:54.787221 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:13:54.787227 | orchestrator | 2026-02-04 01:13:54.787234 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-02-04 01:13:54.787241 | orchestrator | Wednesday 04 February 2026 01:13:10 +0000 (0:00:01.136) 0:00:33.221 **** 2026-02-04 01:13:54.787248 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 01:13:54.787255 | orchestrator | 2026-02-04 01:13:54.787262 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2026-02-04 01:13:54.787269 | orchestrator | Wednesday 04 February 2026 01:13:11 +0000 (0:00:00.819) 0:00:34.040 **** 2026-02-04 01:13:54.787277 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 
'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-04 01:13:54.787292 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-04 01:13:54.787301 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-04 01:13:54.787314 | orchestrator | 2026-02-04 01:13:54.787326 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2026-02-04 01:13:54.787332 | orchestrator | Wednesday 04 February 2026 01:13:13 +0000 (0:00:02.049) 0:00:36.090 **** 2026-02-04 01:13:54.787336 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-04 01:13:54.787341 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:13:54.787348 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 
'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-04 01:13:54.787359 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:13:54.787372 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-04 01:13:54.787380 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:13:54.787387 | orchestrator | 2026-02-04 01:13:54.787395 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2026-02-04 01:13:54.787402 | orchestrator | Wednesday 04 February 2026 01:13:14 +0000 (0:00:01.359) 0:00:37.450 **** 2026-02-04 01:13:54.787410 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-04 01:13:54.787423 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:13:54.787434 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-04 01:13:54.787441 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:13:54.787448 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 
'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-04 01:13:54.787456 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:13:54.787463 | orchestrator | 2026-02-04 01:13:54.787470 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2026-02-04 01:13:54.787477 | orchestrator | Wednesday 04 February 2026 01:13:16 +0000 (0:00:01.427) 0:00:38.877 **** 2026-02-04 01:13:54.787489 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-04 01:13:54.787497 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-04 01:13:54.787510 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-04 01:13:54.787519 | orchestrator | 2026-02-04 01:13:54.787526 | orchestrator | TASK [placement : Copying over placement.conf] 
********************************* 2026-02-04 01:13:54.787534 | orchestrator | Wednesday 04 February 2026 01:13:18 +0000 (0:00:02.213) 0:00:41.090 **** 2026-02-04 01:13:54.787541 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-04 01:13:54.787549 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 
'tls_backend': 'no'}}}}) 2026-02-04 01:13:54.787561 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-04 01:13:54.787573 | orchestrator | 2026-02-04 01:13:54.787581 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2026-02-04 01:13:54.787588 | orchestrator | Wednesday 04 February 2026 01:13:23 +0000 (0:00:05.149) 0:00:46.240 **** 2026-02-04 01:13:54.787595 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-02-04 01:13:54.787603 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-02-04 01:13:54.787611 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-02-04 01:13:54.787618 | orchestrator | 2026-02-04 01:13:54.787626 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2026-02-04 01:13:54.787633 | orchestrator | Wednesday 04 February 2026 01:13:25 +0000 (0:00:01.526) 0:00:47.767 **** 2026-02-04 01:13:54.787676 | orchestrator | 
changed: [testbed-node-0] 2026-02-04 01:13:54.787685 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:13:54.787692 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:13:54.787700 | orchestrator | 2026-02-04 01:13:54.787707 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2026-02-04 01:13:54.787714 | orchestrator | Wednesday 04 February 2026 01:13:26 +0000 (0:00:01.347) 0:00:49.114 **** 2026-02-04 01:13:54.787722 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-04 01:13:54.787730 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:13:54.787738 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-04 01:13:54.787745 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:13:54.787757 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-04 01:13:54.787770 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:13:54.787777 | orchestrator | 2026-02-04 01:13:54.787785 | orchestrator | TASK [placement : Check placement containers] ********************************** 2026-02-04 01:13:54.787792 | orchestrator | Wednesday 04 February 2026 01:13:27 +0000 (0:00:00.711) 0:00:49.826 **** 2026-02-04 01:13:54.787802 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': 
True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-04 01:13:54.787811 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-04 01:13:54.787818 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-04 01:13:54.787826 | orchestrator | 2026-02-04 01:13:54.787834 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2026-02-04 01:13:54.787841 | orchestrator | Wednesday 04 February 2026 01:13:28 +0000 (0:00:01.499) 0:00:51.326 **** 2026-02-04 01:13:54.787848 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:13:54.787856 | orchestrator | 2026-02-04 01:13:54.787863 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2026-02-04 01:13:54.787879 | orchestrator | Wednesday 04 February 2026 01:13:31 +0000 (0:00:02.815) 0:00:54.141 **** 2026-02-04 01:13:54.787886 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:13:54.787894 | orchestrator | 2026-02-04 01:13:54.787901 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2026-02-04 01:13:54.787909 | orchestrator | Wednesday 04 February 2026 01:13:33 +0000 (0:00:02.102) 0:00:56.244 **** 2026-02-04 01:13:54.787916 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:13:54.787923 | orchestrator | 2026-02-04 01:13:54.787931 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-02-04 01:13:54.787938 | orchestrator | Wednesday 04 February 2026 01:13:45 +0000 (0:00:11.753) 0:01:07.998 **** 2026-02-04 01:13:54.787945 | orchestrator | 2026-02-04 01:13:54.787953 | 
orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-02-04 01:13:54.787960 | orchestrator | Wednesday 04 February 2026 01:13:45 +0000 (0:00:00.280) 0:01:08.278 **** 2026-02-04 01:13:54.787968 | orchestrator | 2026-02-04 01:13:54.787979 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-02-04 01:13:54.787986 | orchestrator | Wednesday 04 February 2026 01:13:45 +0000 (0:00:00.168) 0:01:08.447 **** 2026-02-04 01:13:54.788004 | orchestrator | 2026-02-04 01:13:54.788011 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2026-02-04 01:13:54.788019 | orchestrator | Wednesday 04 February 2026 01:13:45 +0000 (0:00:00.168) 0:01:08.615 **** 2026-02-04 01:13:54.788026 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:13:54.788034 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:13:54.788041 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:13:54.788049 | orchestrator | 2026-02-04 01:13:54.788056 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 01:13:54.788064 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-04 01:13:54.788072 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-04 01:13:54.788080 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-04 01:13:54.788087 | orchestrator | 2026-02-04 01:13:54.788095 | orchestrator | 2026-02-04 01:13:54.788102 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-04 01:13:54.788110 | orchestrator | Wednesday 04 February 2026 01:13:52 +0000 (0:00:06.617) 0:01:15.232 **** 2026-02-04 01:13:54.788118 | orchestrator | 
=============================================================================== 2026-02-04 01:13:54.788125 | orchestrator | placement : Running placement bootstrap container ---------------------- 11.75s 2026-02-04 01:13:54.788136 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 6.70s 2026-02-04 01:13:54.788144 | orchestrator | placement : Restart placement-api container ----------------------------- 6.62s 2026-02-04 01:13:54.788151 | orchestrator | placement : Copying over placement.conf --------------------------------- 5.15s 2026-02-04 01:13:54.788159 | orchestrator | service-ks-register : placement | Creating services --------------------- 4.80s 2026-02-04 01:13:54.788167 | orchestrator | service-ks-register : placement | Creating users ------------------------ 3.77s 2026-02-04 01:13:54.788175 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.67s 2026-02-04 01:13:54.788182 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 3.61s 2026-02-04 01:13:54.788189 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.58s 2026-02-04 01:13:54.788196 | orchestrator | placement : Creating placement databases -------------------------------- 2.82s 2026-02-04 01:13:54.788204 | orchestrator | placement : Copying over config.json files for services ----------------- 2.21s 2026-02-04 01:13:54.788211 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.10s 2026-02-04 01:13:54.788223 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 2.05s 2026-02-04 01:13:54.788230 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.53s 2026-02-04 01:13:54.788238 | orchestrator | placement : Check placement containers ---------------------------------- 1.50s 2026-02-04 01:13:54.788245 | orchestrator | 
service-cert-copy : placement | Copying over backend internal TLS key --- 1.43s 2026-02-04 01:13:54.788252 | orchestrator | placement : include_tasks ----------------------------------------------- 1.41s 2026-02-04 01:13:54.788257 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS certificate --- 1.36s 2026-02-04 01:13:54.788261 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.35s 2026-02-04 01:13:54.788266 | orchestrator | placement : Ensuring config directories exist --------------------------- 1.24s 2026-02-04 01:13:54.788340 | orchestrator | 2026-02-04 01:13:54 | INFO  | Task 6de41111-19cd-4ecd-8571-65308ec3f489 is in state STARTED 2026-02-04 01:13:54.790143 | orchestrator | 2026-02-04 01:13:54 | INFO  | Task 1eda9427-1779-4b4f-8663-35784540aa78 is in state STARTED 2026-02-04 01:13:54.790184 | orchestrator | 2026-02-04 01:13:54 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:13:57.835976 | orchestrator | 2026-02-04 01:13:57 | INFO  | Task e1a0af0c-c5dc-4576-858b-027414f2952c is in state STARTED 2026-02-04 01:13:57.837928 | orchestrator | 2026-02-04 01:13:57 | INFO  | Task a8192661-ad96-443d-826a-8af14f9390e0 is in state STARTED 2026-02-04 01:13:57.839669 | orchestrator | 2026-02-04 01:13:57 | INFO  | Task 6de41111-19cd-4ecd-8571-65308ec3f489 is in state STARTED 2026-02-04 01:13:57.841422 | orchestrator | 2026-02-04 01:13:57 | INFO  | Task 1eda9427-1779-4b4f-8663-35784540aa78 is in state STARTED 2026-02-04 01:13:57.841454 | orchestrator | 2026-02-04 01:13:57 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:14:00.871033 | orchestrator | 2026-02-04 01:14:00 | INFO  | Task e1a0af0c-c5dc-4576-858b-027414f2952c is in state STARTED 2026-02-04 01:14:00.873304 | orchestrator | 2026-02-04 01:14:00 | INFO  | Task a8192661-ad96-443d-826a-8af14f9390e0 is in state STARTED 2026-02-04 01:14:00.875314 | orchestrator | 2026-02-04 01:14:00 | INFO  | Task 
6de41111-19cd-4ecd-8571-65308ec3f489 is in state STARTED 2026-02-04 01:14:00.876824 | orchestrator | 2026-02-04 01:14:00 | INFO  | Task 1eda9427-1779-4b4f-8663-35784540aa78 is in state STARTED 2026-02-04 01:14:00.876869 | orchestrator | 2026-02-04 01:14:00 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:14:03.918438 | orchestrator | 2026-02-04 01:14:03 | INFO  | Task e1a0af0c-c5dc-4576-858b-027414f2952c is in state STARTED 2026-02-04 01:14:03.919449 | orchestrator | 2026-02-04 01:14:03 | INFO  | Task a8192661-ad96-443d-826a-8af14f9390e0 is in state STARTED 2026-02-04 01:14:03.922172 | orchestrator | 2026-02-04 01:14:03 | INFO  | Task 6de41111-19cd-4ecd-8571-65308ec3f489 is in state STARTED 2026-02-04 01:14:03.923260 | orchestrator | 2026-02-04 01:14:03 | INFO  | Task 1eda9427-1779-4b4f-8663-35784540aa78 is in state STARTED 2026-02-04 01:14:03.924558 | orchestrator | 2026-02-04 01:14:03 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:14:06.956744 | orchestrator | 2026-02-04 01:14:06 | INFO  | Task e1a0af0c-c5dc-4576-858b-027414f2952c is in state STARTED 2026-02-04 01:14:06.958884 | orchestrator | 2026-02-04 01:14:06 | INFO  | Task a8192661-ad96-443d-826a-8af14f9390e0 is in state STARTED 2026-02-04 01:14:06.961889 | orchestrator | 2026-02-04 01:14:06 | INFO  | Task 6de41111-19cd-4ecd-8571-65308ec3f489 is in state STARTED 2026-02-04 01:14:06.964206 | orchestrator | 2026-02-04 01:14:06 | INFO  | Task 1eda9427-1779-4b4f-8663-35784540aa78 is in state STARTED 2026-02-04 01:14:06.964619 | orchestrator | 2026-02-04 01:14:06 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:14:10.013134 | orchestrator | 2026-02-04 01:14:10 | INFO  | Task e1a0af0c-c5dc-4576-858b-027414f2952c is in state STARTED 2026-02-04 01:14:10.013660 | orchestrator | 2026-02-04 01:14:10 | INFO  | Task a8192661-ad96-443d-826a-8af14f9390e0 is in state STARTED 2026-02-04 01:14:10.014656 | orchestrator | 2026-02-04 01:14:10 | INFO  | Task 
6de41111-19cd-4ecd-8571-65308ec3f489 is in state STARTED 2026-02-04 01:14:10.015474 | orchestrator | 2026-02-04 01:14:10 | INFO  | Task 1eda9427-1779-4b4f-8663-35784540aa78 is in state STARTED 2026-02-04 01:14:10.015499 | orchestrator | 2026-02-04 01:14:10 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:14:13.054147 | orchestrator | 2026-02-04 01:14:13 | INFO  | Task e1a0af0c-c5dc-4576-858b-027414f2952c is in state STARTED 2026-02-04 01:14:13.055429 | orchestrator | 2026-02-04 01:14:13 | INFO  | Task a8192661-ad96-443d-826a-8af14f9390e0 is in state STARTED 2026-02-04 01:14:13.056701 | orchestrator | 2026-02-04 01:14:13 | INFO  | Task 6de41111-19cd-4ecd-8571-65308ec3f489 is in state STARTED 2026-02-04 01:14:13.057956 | orchestrator | 2026-02-04 01:14:13 | INFO  | Task 1eda9427-1779-4b4f-8663-35784540aa78 is in state STARTED 2026-02-04 01:14:13.058040 | orchestrator | 2026-02-04 01:14:13 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:14:16.099263 | orchestrator | 2026-02-04 01:14:16 | INFO  | Task e1a0af0c-c5dc-4576-858b-027414f2952c is in state STARTED 2026-02-04 01:14:16.101600 | orchestrator | 2026-02-04 01:14:16 | INFO  | Task a8192661-ad96-443d-826a-8af14f9390e0 is in state STARTED 2026-02-04 01:14:16.104210 | orchestrator | 2026-02-04 01:14:16 | INFO  | Task 6de41111-19cd-4ecd-8571-65308ec3f489 is in state STARTED 2026-02-04 01:14:16.106413 | orchestrator | 2026-02-04 01:14:16 | INFO  | Task 1eda9427-1779-4b4f-8663-35784540aa78 is in state STARTED 2026-02-04 01:14:16.106527 | orchestrator | 2026-02-04 01:14:16 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:14:19.154634 | orchestrator | 2026-02-04 01:14:19 | INFO  | Task e1a0af0c-c5dc-4576-858b-027414f2952c is in state STARTED 2026-02-04 01:14:19.156765 | orchestrator | 2026-02-04 01:14:19 | INFO  | Task a8192661-ad96-443d-826a-8af14f9390e0 is in state STARTED 2026-02-04 01:14:19.159325 | orchestrator | 2026-02-04 01:14:19 | INFO  | Task 
6de41111-19cd-4ecd-8571-65308ec3f489 is in state STARTED 2026-02-04 01:14:19.162385 | orchestrator | 2026-02-04 01:14:19 | INFO  | Task 1eda9427-1779-4b4f-8663-35784540aa78 is in state STARTED 2026-02-04 01:14:19.162427 | orchestrator | 2026-02-04 01:14:19 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:14:22.204584 | orchestrator | 2026-02-04 01:14:22 | INFO  | Task e1a0af0c-c5dc-4576-858b-027414f2952c is in state STARTED 2026-02-04 01:14:22.204808 | orchestrator | 2026-02-04 01:14:22 | INFO  | Task a8192661-ad96-443d-826a-8af14f9390e0 is in state STARTED 2026-02-04 01:14:22.207135 | orchestrator | 2026-02-04 01:14:22 | INFO  | Task 6de41111-19cd-4ecd-8571-65308ec3f489 is in state STARTED 2026-02-04 01:14:22.208210 | orchestrator | 2026-02-04 01:14:22 | INFO  | Task 1eda9427-1779-4b4f-8663-35784540aa78 is in state STARTED 2026-02-04 01:14:22.208241 | orchestrator | 2026-02-04 01:14:22 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:14:25.241166 | orchestrator | 2026-02-04 01:14:25 | INFO  | Task e1a0af0c-c5dc-4576-858b-027414f2952c is in state STARTED 2026-02-04 01:14:25.242649 | orchestrator | 2026-02-04 01:14:25 | INFO  | Task a8192661-ad96-443d-826a-8af14f9390e0 is in state STARTED 2026-02-04 01:14:25.246782 | orchestrator | 2026-02-04 01:14:25 | INFO  | Task 6de41111-19cd-4ecd-8571-65308ec3f489 is in state SUCCESS 2026-02-04 01:14:25.248903 | orchestrator | 2026-02-04 01:14:25.248947 | orchestrator | 2026-02-04 01:14:25.248953 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-04 01:14:25.248958 | orchestrator | 2026-02-04 01:14:25.248996 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-04 01:14:25.249003 | orchestrator | Wednesday 04 February 2026 01:09:18 +0000 (0:00:00.298) 0:00:00.298 **** 2026-02-04 01:14:25.249010 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:14:25.249019 | orchestrator | ok: [testbed-node-1] 
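Editor's note on the long per-item dicts repeated through the placement tasks above: each one is a single kolla-ansible service definition. Reconstructed as plain Python below, with values copied from the testbed-node-0 items in this log; the `enabled_containers` helper is illustrative only, not part of kolla-ansible:

```python
# One kolla-ansible service entry as logged for testbed-node-0 (reconstructed
# from the loop items above; the trailing empty volume string is preserved).
placement_services = {
    "placement-api": {
        "container_name": "placement_api",
        "group": "placement-api",
        "image": "registry.osism.tech/kolla/placement-api:2024.2",
        "enabled": True,
        "volumes": [
            "/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro",
            "/etc/localtime:/etc/localtime:ro",
            "/etc/timezone:/etc/timezone:ro",
            "kolla_logs:/var/log/kolla/",
            "",
        ],
        "dimensions": {},
        "healthcheck": {
            "interval": "30",
            "retries": "3",
            "start_period": "5",
            "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:8780"],
            "timeout": "30",
        },
        "haproxy": {
            # Internal VIP frontend and external FQDN frontend, both plain HTTP
            # to the backend (tls_backend: "no").
            "placement_api": {
                "enabled": True, "mode": "http", "external": False,
                "port": "8780", "listen_port": "8780", "tls_backend": "no",
            },
            "placement_api_external": {
                "enabled": True, "mode": "http", "external": True,
                "external_fqdn": "api.testbed.osism.xyz",
                "port": "8780", "listen_port": "8780", "tls_backend": "no",
            },
        },
    },
}


def enabled_containers(services):
    """Names of containers whose service entry is enabled (illustrative helper)."""
    return [s["container_name"] for s in services.values() if s["enabled"]]
```

The roles iterate over exactly this mapping, which is why every "Copying over …" task prints the whole dict once per item and per node (only the healthcheck IP differs between nodes).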
2026-02-04 01:14:25.249026 | orchestrator | ok: [testbed-node-2]
2026-02-04 01:14:25.249033 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:14:25.249038 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:14:25.249042 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:14:25.249047 | orchestrator |
2026-02-04 01:14:25.249051 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-04 01:14:25.249055 | orchestrator | Wednesday 04 February 2026 01:09:19 +0000 (0:00:00.795) 0:00:01.093 ****
2026-02-04 01:14:25.249059 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True)
2026-02-04 01:14:25.249063 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True)
2026-02-04 01:14:25.249067 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True)
2026-02-04 01:14:25.249075 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True)
2026-02-04 01:14:25.249079 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True)
2026-02-04 01:14:25.249083 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True)
2026-02-04 01:14:25.249087 | orchestrator |
2026-02-04 01:14:25.249090 | orchestrator | PLAY [Apply role neutron] ******************************************************
2026-02-04 01:14:25.249095 | orchestrator |
2026-02-04 01:14:25.249098 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-02-04 01:14:25.249102 | orchestrator | Wednesday 04 February 2026 01:09:20 +0000 (0:00:00.713) 0:00:01.807 ****
2026-02-04 01:14:25.249106 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-04 01:14:25.249111 | orchestrator |
2026-02-04 01:14:25.249115 | orchestrator | TASK [neutron : Get container facts] *******************************************
2026-02-04 01:14:25.249119 | orchestrator | Wednesday 04 February 2026 01:09:21 +0000 (0:00:01.354) 0:00:03.161 ****
2026-02-04 01:14:25.249123 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:14:25.249130 | orchestrator | ok: [testbed-node-1]
2026-02-04 01:14:25.249136 | orchestrator | ok: [testbed-node-2]
2026-02-04 01:14:25.249142 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:14:25.249149 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:14:25.249155 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:14:25.249160 | orchestrator |
2026-02-04 01:14:25.249167 | orchestrator | TASK [neutron : Get container volume facts] ************************************
2026-02-04 01:14:25.249173 | orchestrator | Wednesday 04 February 2026 01:09:23 +0000 (0:00:01.707) 0:00:04.869 ****
2026-02-04 01:14:25.249180 | orchestrator | ok: [testbed-node-2]
2026-02-04 01:14:25.249186 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:14:25.249192 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:14:25.249199 | orchestrator | ok: [testbed-node-1]
2026-02-04 01:14:25.249204 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:14:25.249208 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:14:25.249212 | orchestrator |
2026-02-04 01:14:25.249215 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************
2026-02-04 01:14:25.249219 | orchestrator | Wednesday 04 February 2026 01:09:24 +0000 (0:00:01.325) 0:00:06.395 ****
2026-02-04 01:14:25.249223 | orchestrator | ok: [testbed-node-0] => {
2026-02-04 01:14:25.249229 | orchestrator |  "changed": false,
2026-02-04 01:14:25.249235 | orchestrator |  "msg": "All assertions passed"
2026-02-04 01:14:25.249241 | orchestrator | }
2026-02-04 01:14:25.249297 | orchestrator | ok: [testbed-node-1] => {
2026-02-04 01:14:25.249305 | orchestrator |  "changed": false,
2026-02-04 01:14:25.249324 | orchestrator |  "msg": "All assertions passed"
2026-02-04 01:14:25.249330 | orchestrator | }
2026-02-04 01:14:25.249337 | orchestrator | ok: [testbed-node-2] => {
2026-02-04 01:14:25.249342 | orchestrator |  "changed": false,
2026-02-04 01:14:25.249348 | orchestrator |  "msg": "All assertions passed"
2026-02-04 01:14:25.249355 | orchestrator | }
2026-02-04 01:14:25.249361 | orchestrator | ok: [testbed-node-3] => {
2026-02-04 01:14:25.249367 | orchestrator |  "changed": false,
2026-02-04 01:14:25.249373 | orchestrator |  "msg": "All assertions passed"
2026-02-04 01:14:25.249379 | orchestrator | }
2026-02-04 01:14:25.249385 | orchestrator | ok: [testbed-node-4] => {
2026-02-04 01:14:25.249391 | orchestrator |  "changed": false,
2026-02-04 01:14:25.249396 | orchestrator |  "msg": "All assertions passed"
2026-02-04 01:14:25.249402 | orchestrator | }
2026-02-04 01:14:25.249407 | orchestrator | ok: [testbed-node-5] => {
2026-02-04 01:14:25.249413 | orchestrator |  "changed": false,
2026-02-04 01:14:25.249420 | orchestrator |  "msg": "All assertions passed"
2026-02-04 01:14:25.249426 | orchestrator | }
2026-02-04 01:14:25.249431 | orchestrator |
2026-02-04 01:14:25.249438 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************
2026-02-04 01:14:25.249444 | orchestrator | Wednesday 04 February 2026 01:09:26 +0000 (0:00:00.775) 0:00:07.721 ****
2026-02-04 01:14:25.249450 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:14:25.249456 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:14:25.249462 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:14:25.249467 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:14:25.249473 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:14:25.249479 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:14:25.249485 | orchestrator |
2026-02-04 01:14:25.249490 | orchestrator | TASK [service-ks-register : neutron | Creating services] ***********************
2026-02-04 01:14:25.249496 | orchestrator | Wednesday 04 February 2026 01:09:27 +0000 (0:00:00.775) 0:00:08.496 ****
2026-02-04 01:14:25.249502 | orchestrator | changed: [testbed-node-0] => (item=neutron (network))
2026-02-04 01:14:25.249508 | orchestrator |
2026-02-04 01:14:25.249515 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] **********************
2026-02-04 01:14:25.249521 | orchestrator | Wednesday 04 February 2026 01:09:30 +0000 (0:00:03.325) 0:00:11.822 ****
2026-02-04 01:14:25.249527 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal)
2026-02-04 01:14:25.249535 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public)
2026-02-04 01:14:25.249541 | orchestrator |
2026-02-04 01:14:25.249560 | orchestrator | TASK [service-ks-register : neutron | Creating projects] ***********************
2026-02-04 01:14:25.249567 | orchestrator | Wednesday 04 February 2026 01:09:36 +0000 (0:00:05.793) 0:00:17.615 ****
2026-02-04 01:14:25.249574 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-02-04 01:14:25.249581 | orchestrator |
2026-02-04 01:14:25.249587 | orchestrator | TASK [service-ks-register : neutron | Creating users] **************************
2026-02-04 01:14:25.249593 | orchestrator | Wednesday 04 February 2026 01:09:39 +0000 (0:00:03.435) 0:00:21.051 ****
2026-02-04 01:14:25.249599 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service)
2026-02-04 01:14:25.249606 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-02-04 01:14:25.249612 | orchestrator |
2026-02-04 01:14:25.249619 | orchestrator | TASK [service-ks-register : neutron | Creating roles] **************************
2026-02-04 01:14:25.249626 | orchestrator | Wednesday 04 February 2026 01:09:43 +0000 (0:00:03.782) 0:00:24.834 ****
2026-02-04 01:14:25.249632 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-02-04 01:14:25.249638 | orchestrator |
2026-02-04 01:14:25.249645 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] *********************
2026-02-04 01:14:25.249656 | orchestrator | Wednesday 04 February 2026 01:09:46 +0000 (0:00:03.556) 0:00:28.391 **** 2026-02-04 01:14:25.249663 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin) 2026-02-04 01:14:25.249676 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service) 2026-02-04 01:14:25.249683 | orchestrator | 2026-02-04 01:14:25.249689 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-02-04 01:14:25.249696 | orchestrator | Wednesday 04 February 2026 01:09:54 +0000 (0:00:07.946) 0:00:36.337 **** 2026-02-04 01:14:25.249702 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:14:25.249709 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:14:25.249715 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:14:25.249722 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:14:25.249728 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:14:25.249735 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:14:25.249742 | orchestrator | 2026-02-04 01:14:25.249748 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2026-02-04 01:14:25.249755 | orchestrator | Wednesday 04 February 2026 01:09:55 +0000 (0:00:00.913) 0:00:37.250 **** 2026-02-04 01:14:25.249761 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:14:25.249768 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:14:25.249774 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:14:25.249781 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:14:25.249788 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:14:25.249795 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:14:25.249801 | orchestrator | 2026-02-04 01:14:25.249808 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2026-02-04 01:14:25.249815 | orchestrator | Wednesday 04 February 2026 
01:09:58 +0000 (0:00:02.257) 0:00:39.508 **** 2026-02-04 01:14:25.249821 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:14:25.249828 | orchestrator | ok: [testbed-node-2] 2026-02-04 01:14:25.249835 | orchestrator | ok: [testbed-node-1] 2026-02-04 01:14:25.249842 | orchestrator | ok: [testbed-node-3] 2026-02-04 01:14:25.249848 | orchestrator | ok: [testbed-node-4] 2026-02-04 01:14:25.249855 | orchestrator | ok: [testbed-node-5] 2026-02-04 01:14:25.249861 | orchestrator | 2026-02-04 01:14:25.249868 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-02-04 01:14:25.249874 | orchestrator | Wednesday 04 February 2026 01:10:00 +0000 (0:00:02.200) 0:00:41.708 **** 2026-02-04 01:14:25.249881 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:14:25.249894 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:14:25.249901 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:14:25.249907 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:14:25.249914 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:14:25.249920 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:14:25.249927 | orchestrator | 2026-02-04 01:14:25.249933 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2026-02-04 01:14:25.249939 | orchestrator | Wednesday 04 February 2026 01:10:02 +0000 (0:00:02.475) 0:00:44.183 **** 2026-02-04 01:14:25.249948 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-04 01:14:25.249965 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-04 01:14:25.250048 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-04 01:14:25.250063 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-04 01:14:25.250070 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-04 01:14:25.250077 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-04 01:14:25.250084 | orchestrator | 2026-02-04 01:14:25.250091 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2026-02-04 01:14:25.250098 | orchestrator | Wednesday 04 February 2026 01:10:06 +0000 (0:00:03.632) 0:00:47.815 **** 2026-02-04 01:14:25.250109 | orchestrator | [WARNING]: Skipped 2026-02-04 01:14:25.250117 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2026-02-04 01:14:25.250124 | orchestrator | due to this access issue: 2026-02-04 01:14:25.250130 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2026-02-04 01:14:25.250137 | orchestrator | a directory 2026-02-04 01:14:25.250143 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-04 01:14:25.250150 | orchestrator | 2026-02-04 01:14:25.250156 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-02-04 01:14:25.250168 | orchestrator | Wednesday 04 February 2026 01:10:07 +0000 (0:00:01.455) 0:00:49.271 **** 2026-02-04 01:14:25.250175 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-04 01:14:25.250183 | orchestrator | 2026-02-04 01:14:25.250189 | orchestrator | TASK [service-cert-copy : neutron | 
Copying over extra CA certificates] ******** 2026-02-04 01:14:25.250195 | orchestrator | Wednesday 04 February 2026 01:10:09 +0000 (0:00:01.902) 0:00:51.173 **** 2026-02-04 01:14:25.250205 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-04 01:14:25.250212 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-04 01:14:25.250219 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 
'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-04 01:14:25.250225 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-04 01:14:25.250240 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-04 01:14:25.250249 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-04 01:14:25.250256 | orchestrator | 2026-02-04 01:14:25.250262 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2026-02-04 01:14:25.250269 | orchestrator | Wednesday 04 February 2026 01:10:14 +0000 (0:00:05.086) 0:00:56.260 **** 2026-02-04 01:14:25.250276 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-04 01:14:25.250283 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:14:25.250290 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-04 01:14:25.250301 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:14:25.250307 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-04 01:14:25.250314 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:14:25.250330 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-04 01:14:25.250336 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:14:25.250345 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-04 01:14:25.250352 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:14:25.250359 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-04 01:14:25.250366 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:14:25.250372 | orchestrator | 2026-02-04 01:14:25.250379 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2026-02-04 01:14:25.250385 | orchestrator | Wednesday 04 February 2026 01:10:19 +0000 (0:00:05.195) 0:01:01.456 **** 2026-02-04 01:14:25.250396 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-04 01:14:25.250402 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:14:25.250413 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-04 01:14:25.250420 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:14:25.250429 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-04 01:14:25.250436 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:14:25.250442 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-04 01:14:25.250449 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:14:25.250456 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-04 01:14:25.250470 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:14:25.250476 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-04 01:14:25.250483 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:14:25.250489 | orchestrator | 2026-02-04 01:14:25.250495 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2026-02-04 01:14:25.250501 | orchestrator | Wednesday 04 February 2026 01:10:26 +0000 (0:00:06.122) 0:01:07.578 **** 2026-02-04 01:14:25.250508 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:14:25.250514 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:14:25.250520 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:14:25.250527 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:14:25.250534 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:14:25.250540 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:14:25.250546 | orchestrator | 2026-02-04 01:14:25.250553 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2026-02-04 01:14:25.250562 | orchestrator | Wednesday 04 February 2026 01:10:29 +0000 (0:00:03.660) 0:01:11.239 
**** 2026-02-04 01:14:25.250569 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:14:25.250575 | orchestrator | 2026-02-04 01:14:25.250581 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2026-02-04 01:14:25.250587 | orchestrator | Wednesday 04 February 2026 01:10:29 +0000 (0:00:00.172) 0:01:11.411 **** 2026-02-04 01:14:25.250594 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:14:25.250600 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:14:25.250606 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:14:25.250613 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:14:25.250619 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:14:25.250625 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:14:25.250632 | orchestrator | 2026-02-04 01:14:25.250638 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2026-02-04 01:14:25.250644 | orchestrator | Wednesday 04 February 2026 01:10:30 +0000 (0:00:00.932) 0:01:12.344 **** 2026-02-04 01:14:25.250653 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 
 2026-02-04 01:14:25.250663 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:14:25.250670 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-04 01:14:25.250677 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:14:25.250684 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  
2026-02-04 01:14:25.250691 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:14:25.250701 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-04 01:14:25.250707 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:14:25.250717 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-04 01:14:25.250723 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:14:25.250730 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-04 01:14:25.250740 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:14:25.250746 | orchestrator | 2026-02-04 01:14:25.250765 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2026-02-04 01:14:25.250771 | orchestrator | Wednesday 04 February 2026 01:10:35 +0000 (0:00:04.346) 0:01:16.690 **** 2026-02-04 01:14:25.250777 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-04 01:14:25.250784 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 
'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-04 01:14:25.250794 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-04 01:14:25.250804 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-04 01:14:25.250815 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-04 01:14:25.250822 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-04 01:14:25.250829 | orchestrator | 2026-02-04 01:14:25.250836 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2026-02-04 01:14:25.250842 | orchestrator | Wednesday 04 February 2026 01:10:41 +0000 (0:00:06.421) 0:01:23.112 **** 2026-02-04 01:14:25.250849 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-04 01:14:25.250859 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-04 01:14:25.250869 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-04 01:14:25.250880 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-04 01:14:25.250886 | orchestrator | changed: 
[testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-04 01:14:25.250893 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-04 01:14:25.250900 | orchestrator | 2026-02-04 01:14:25.250907 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2026-02-04 01:14:25.250913 | orchestrator | Wednesday 04 February 2026 01:10:48 +0000 (0:00:07.338) 0:01:30.450 **** 2026-02-04 01:14:25.250925 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-04 01:14:25.250937 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:14:25.250946 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-04 01:14:25.250953 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:14:25.250959 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-04 01:14:25.250966 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:14:25.250973 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-04 01:14:25.250990 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:14:25.250997 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-04 01:14:25.251004 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:14:25.251014 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-04 01:14:25.251026 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:14:25.251033 | orchestrator | 2026-02-04 01:14:25.251042 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2026-02-04 01:14:25.251048 | orchestrator | Wednesday 04 February 2026 01:10:52 +0000 (0:00:03.264) 0:01:33.714 **** 2026-02-04 01:14:25.251054 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:14:25.251060 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:14:25.251067 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:14:25.251073 | orchestrator | skipping: [testbed-node-4] 2026-02-04 
01:14:25.251079 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:14:25.251085 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:14:25.251092 | orchestrator | 2026-02-04 01:14:25.251098 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2026-02-04 01:14:25.251105 | orchestrator | Wednesday 04 February 2026 01:10:56 +0000 (0:00:03.939) 0:01:37.654 **** 2026-02-04 01:14:25.251111 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-04 01:14:25.251118 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:14:25.251125 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-04 01:14:25.251131 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:14:25.251138 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-04 01:14:25.251144 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:14:25.251155 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-04 01:14:25.251169 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-04 01:14:25.251177 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-04 01:14:25.251184 | orchestrator |
2026-02-04 01:14:25.251190 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] ****************************
2026-02-04 01:14:25.251197 | orchestrator | Wednesday 04 February 2026 01:11:02 +0000 (0:00:06.217) 0:01:43.871 ****
2026-02-04 01:14:25.251203 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:14:25.251210 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:14:25.251216 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:14:25.251229 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:14:25.251235 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:14:25.251242 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:14:25.251248 | orchestrator |
2026-02-04 01:14:25.251255 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] ****************************
2026-02-04 01:14:25.251262 | orchestrator | Wednesday 04 February 2026 01:11:05 +0000 (0:00:03.045) 0:01:46.917 ****
2026-02-04 01:14:25.251268 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:14:25.251274 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:14:25.251280 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:14:25.251287 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:14:25.251293 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:14:25.251299 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:14:25.251305 | orchestrator |
2026-02-04 01:14:25.251311 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] **********************************
2026-02-04 01:14:25.251324 | orchestrator | Wednesday 04 February 2026 01:11:08 +0000 (0:00:02.853) 0:01:49.770 ****
2026-02-04 01:14:25.251331 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:14:25.251337 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:14:25.251344 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:14:25.251350 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:14:25.251356 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:14:25.251363 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:14:25.251369 | orchestrator |
2026-02-04 01:14:25.251375 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] ***********************************
2026-02-04 01:14:25.251381 | orchestrator | Wednesday 04 February 2026 01:11:11 +0000 (0:00:03.250) 0:01:53.021 ****
2026-02-04 01:14:25.251387 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:14:25.251394 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:14:25.251400 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:14:25.251406 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:14:25.251413 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:14:25.251419 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:14:25.251426 | orchestrator |
2026-02-04 01:14:25.251432 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************
2026-02-04 01:14:25.251439 | orchestrator | Wednesday 04 February 2026 01:11:14 +0000 (0:00:02.683) 0:01:55.704 ****
2026-02-04 01:14:25.251445 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:14:25.251451 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:14:25.251458 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:14:25.251464 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:14:25.251473 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:14:25.251479 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:14:25.251485 | orchestrator |
2026-02-04 01:14:25.251492 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] ***********************************
2026-02-04 01:14:25.251498 | orchestrator | Wednesday 04 February 2026 01:11:18 +0000 (0:00:04.592) 0:02:00.296 ****
2026-02-04 01:14:25.251514 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:14:25.251520 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:14:25.251526 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:14:25.251533 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:14:25.251539 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:14:25.251546 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:14:25.251552 | orchestrator |
2026-02-04 01:14:25.251558 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] *************************************
2026-02-04 01:14:25.251564 | orchestrator | Wednesday 04 February 2026 01:11:21 +0000 (0:00:03.087) 0:02:03.384 ****
2026-02-04 01:14:25.251571 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-02-04 01:14:25.251578 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:14:25.251584 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-02-04 01:14:25.251591 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:14:25.251597 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-02-04 01:14:25.251604 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:14:25.251610 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-02-04 01:14:25.251616 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:14:25.251622 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-02-04 01:14:25.251628 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:14:25.251635 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-02-04 01:14:25.251641 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:14:25.251647 | orchestrator |
2026-02-04 01:14:25.251654 | orchestrator | TASK [neutron : Copying over l3_agent.ini] *************************************
2026-02-04 01:14:25.251660 | orchestrator | Wednesday 04 February 2026 01:11:24 +0000 (0:00:02.425) 0:02:05.809 ****
2026-02-04 01:14:25.251672 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-04 01:14:25.251678 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:14:25.251708 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-04 01:14:25.251715 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:14:25.251726 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-04 01:14:25.251733 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:14:25.251742 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-04 01:14:25.251749 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:14:25.251756 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-04 01:14:25.251767 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:14:25.251773 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-04 01:14:25.251779 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:14:25.251786 | orchestrator |
2026-02-04 01:14:25.251792 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] *********************************
2026-02-04 01:14:25.251798 | orchestrator | Wednesday 04 February 2026 01:11:28 +0000 (0:00:04.197) 0:02:10.007 ****
2026-02-04 01:14:25.251805 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-04 01:14:25.251811 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:14:25.251822 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-04 01:14:25.251830 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:14:25.251839 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-04 01:14:25.251851 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-04 01:14:25.251857 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:14:25.251863 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:14:25.251869 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-04 01:14:25.251876 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:14:25.251882 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-04 01:14:25.251889 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:14:25.251895 | orchestrator |
2026-02-04 01:14:25.251901 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] *******************************
2026-02-04 01:14:25.251908 | orchestrator | Wednesday 04 February 2026 01:11:31 +0000 (0:00:03.389) 0:02:13.396 ****
2026-02-04 01:14:25.251915 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:14:25.251924 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:14:25.251931 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:14:25.251937 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:14:25.251943 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:14:25.251950 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:14:25.251956 | orchestrator |
2026-02-04 01:14:25.251962 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] *******************
2026-02-04 01:14:25.251969 | orchestrator | Wednesday 04 February 2026 01:11:35 +0000 (0:00:03.552) 0:02:16.949 ****
2026-02-04 01:14:25.251975 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:14:25.252049 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:14:25.252056 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:14:25.252063 | orchestrator | changed: [testbed-node-4]
2026-02-04 01:14:25.252069 | orchestrator | changed: [testbed-node-3]
2026-02-04 01:14:25.252076 | orchestrator | changed: [testbed-node-5]
2026-02-04 01:14:25.252082 | orchestrator |
2026-02-04 01:14:25.252089 | orchestrator | TASK [neutron : Copying over metering_agent.ini] *******************************
2026-02-04 01:14:25.252096 | orchestrator | Wednesday 04 February 2026 01:11:41 +0000 (0:00:05.915) 0:02:22.864 ****
2026-02-04 01:14:25.252103 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:14:25.252109 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:14:25.252120 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:14:25.252127 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:14:25.252133 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:14:25.252140 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:14:25.252146 | orchestrator |
2026-02-04 01:14:25.252153 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] *************************
2026-02-04 01:14:25.252159 | orchestrator | Wednesday 04 February 2026 01:11:44 +0000 (0:00:03.597) 0:02:26.462 ****
2026-02-04 01:14:25.252166 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:14:25.252172 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:14:25.252179 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:14:25.252186 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:14:25.252193 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:14:25.252199 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:14:25.252205 | orchestrator |
2026-02-04 01:14:25.252212 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] **********************************
2026-02-04 01:14:25.252219 | orchestrator | Wednesday 04 February 2026 01:11:48 +0000 (0:00:03.857) 0:02:30.320 ****
2026-02-04 01:14:25.252225 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:14:25.252232 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:14:25.252238 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:14:25.252245 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:14:25.252251 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:14:25.252265 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:14:25.252277 | orchestrator |
2026-02-04 01:14:25.252283 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************
2026-02-04 01:14:25.252290 | orchestrator | Wednesday 04 February 2026 01:11:52 +0000 (0:00:03.654) 0:02:33.974 ****
2026-02-04 01:14:25.252296 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:14:25.252302 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:14:25.252308 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:14:25.252315 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:14:25.252321 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:14:25.252327 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:14:25.252333 | orchestrator |
2026-02-04 01:14:25.252340 | orchestrator | TASK [neutron : Copying over nsx.ini] ******************************************
2026-02-04 01:14:25.252347 | orchestrator | Wednesday 04 February 2026 01:11:55 +0000 (0:00:02.685) 0:02:36.659 ****
2026-02-04 01:14:25.252353 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:14:25.252360 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:14:25.252366 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:14:25.252372 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:14:25.252379 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:14:25.252385 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:14:25.252391 | orchestrator |
2026-02-04 01:14:25.252398 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] **************************
2026-02-04 01:14:25.252404 | orchestrator | Wednesday 04 February 2026 01:11:57 +0000 (0:00:02.513) 0:02:39.173 ****
2026-02-04 01:14:25.252411 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:14:25.252417 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:14:25.252424 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:14:25.252431 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:14:25.252442 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:14:25.252448 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:14:25.252454 | orchestrator |
2026-02-04 01:14:25.252461 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ********************************
2026-02-04 01:14:25.252467 | orchestrator | Wednesday 04 February 2026 01:12:01 +0000 (0:00:03.596) 0:02:42.769 ****
2026-02-04 01:14:25.252473 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:14:25.252479 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:14:25.252485 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:14:25.252492 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:14:25.252498 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:14:25.252505 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:14:25.252511 | orchestrator |
2026-02-04 01:14:25.252518 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] ****************************
2026-02-04 01:14:25.252525 | orchestrator | Wednesday 04 February 2026 01:12:05 +0000 (0:00:04.287) 0:02:47.057 ****
2026-02-04 01:14:25.252531 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-02-04 01:14:25.252538 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:14:25.252544 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-02-04 01:14:25.252550 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:14:25.252557 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-02-04 01:14:25.252563 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:14:25.252569 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-02-04 01:14:25.252576 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:14:25.252586 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-02-04 01:14:25.252592 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:14:25.252598 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-02-04 01:14:25.252604 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:14:25.252610 | orchestrator |
2026-02-04 01:14:25.252617 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ********************************
2026-02-04 01:14:25.252624 | orchestrator | Wednesday 04 February 2026 01:12:11 +0000 (0:00:05.476) 0:02:52.533 ****
2026-02-04 01:14:25.252635 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-04 01:14:25.252642 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:14:25.252648 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-04 01:14:25.252658 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:14:25.252664 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-04 01:14:25.252671 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:14:25.252705 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-04 01:14:25.252712 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:14:25.252730 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-04 01:14:25.252736 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:14:25.252746 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-04 01:14:25.252753 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:14:25.252759 | orchestrator |
2026-02-04 01:14:25.252765 | orchestrator | TASK [neutron : Check neutron containers] **************************************
2026-02-04 01:14:25.252772 | orchestrator | Wednesday 04 February 2026 01:12:14 +0000 (0:00:03.494) 0:02:56.027 ****
2026-02-04 01:14:25.252783 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-04 01:14:25.252790 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-04 01:14:25.252800 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-04 01:14:25.252810 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-04 01:14:25.252818 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-04 01:14:25.252830 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-04 01:14:25.252836 | orchestrator |
2026-02-04 01:14:25.252842 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-02-04 01:14:25.252849 | orchestrator | Wednesday 04 February 2026 01:12:21 +0000 (0:00:07.142) 0:03:03.170 ****
2026-02-04 01:14:25.252856 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:14:25.252863 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:14:25.252869 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:14:25.252876 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:14:25.252882 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:14:25.252888 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:14:25.252895 | orchestrator |
2026-02-04 01:14:25.252901 | orchestrator | TASK [neutron : Creating Neutron database] *************************************
2026-02-04 01:14:25.252907 | orchestrator | Wednesday 04 February 2026 01:12:22 +0000 (0:00:00.886) 0:03:04.056 ****
2026-02-04 01:14:25.252913 | orchestrator | changed: [testbed-node-0]
2026-02-04 01:14:25.252920 | orchestrator |
2026-02-04 01:14:25.252926 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ********
2026-02-04 01:14:25.252932 | orchestrator | Wednesday 04 February 2026 01:12:24 +0000 (0:00:02.029) 0:03:06.086 ****
2026-02-04 01:14:25.252939 | orchestrator | changed: [testbed-node-0]
2026-02-04 01:14:25.252945 | orchestrator |
2026-02-04 01:14:25.252950 | orchestrator | TASK [neutron : Running Neutron bootstrap container] ***************************
2026-02-04 01:14:25.252956 | orchestrator | Wednesday 04 February 2026 01:12:27 +0000 (0:00:02.798) 0:03:08.885 ****
2026-02-04 01:14:25.252962 | orchestrator | changed: [testbed-node-0]
2026-02-04 01:14:25.252969 | orchestrator |
2026-02-04 01:14:25.252975 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-02-04 01:14:25.252994 | orchestrator | Wednesday 04 February 2026 01:13:09 +0000 (0:00:42.469) 0:03:51.354 ****
2026-02-04 01:14:25.253000 | orchestrator |
2026-02-04 01:14:25.253006 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-02-04 01:14:25.253012 | orchestrator | Wednesday 04 February 2026 01:13:10 +0000 (0:00:00.163) 0:03:51.518 ****
2026-02-04 01:14:25.253018 | orchestrator |
2026-02-04 01:14:25.253025 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-02-04 01:14:25.253031 | orchestrator | Wednesday 04 February 2026 01:13:10 +0000 (0:00:00.438) 0:03:51.957 ****
2026-02-04 01:14:25.253037 | orchestrator |
2026-02-04 01:14:25.253044 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-02-04 01:14:25.253050 | orchestrator | Wednesday 04 February 2026 01:13:10 +0000 (0:00:00.090) 0:03:52.047 ****
2026-02-04 01:14:25.253057 | orchestrator |
2026-02-04 01:14:25.253067 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-02-04 01:14:25.253074 | orchestrator | Wednesday 04 February 2026 01:13:10 +0000 (0:00:00.074) 0:03:52.122 ****
2026-02-04 01:14:25.253080 | orchestrator |
2026-02-04 01:14:25.253086 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-02-04 01:14:25.253093 | orchestrator | Wednesday 04 February 2026 01:13:10 +0000 (0:00:00.090) 0:03:52.212 ****
2026-02-04 01:14:25.253104 | orchestrator | 2026-02-04 01:14:25.253110 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] ******************* 2026-02-04 01:14:25.253117 | orchestrator | Wednesday 04 February 2026 01:13:10 +0000 (0:00:00.135) 0:03:52.347 **** 2026-02-04 01:14:25.253123 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:14:25.253129 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:14:25.253136 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:14:25.253142 | orchestrator | 2026-02-04 01:14:25.253149 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] ******* 2026-02-04 01:14:25.253155 | orchestrator | Wednesday 04 February 2026 01:13:36 +0000 (0:00:25.979) 0:04:18.327 **** 2026-02-04 01:14:25.253161 | orchestrator | changed: [testbed-node-3] 2026-02-04 01:14:25.253167 | orchestrator | changed: [testbed-node-4] 2026-02-04 01:14:25.253177 | orchestrator | changed: [testbed-node-5] 2026-02-04 01:14:25.253184 | orchestrator | 2026-02-04 01:14:25.253190 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 01:14:25.253196 | orchestrator | testbed-node-0 : ok=26  changed=15  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-02-04 01:14:25.253203 | orchestrator | testbed-node-1 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2026-02-04 01:14:25.253209 | orchestrator | testbed-node-2 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2026-02-04 01:14:25.253216 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-02-04 01:14:25.253222 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-02-04 01:14:25.253229 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-02-04 01:14:25.253235 | 
orchestrator | 2026-02-04 01:14:25.253242 | orchestrator | 2026-02-04 01:14:25.253248 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-04 01:14:25.253254 | orchestrator | Wednesday 04 February 2026 01:14:21 +0000 (0:00:44.670) 0:05:02.997 **** 2026-02-04 01:14:25.253261 | orchestrator | =============================================================================== 2026-02-04 01:14:25.253267 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 44.67s 2026-02-04 01:14:25.253273 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 42.47s 2026-02-04 01:14:25.253279 | orchestrator | neutron : Restart neutron-server container ----------------------------- 25.98s 2026-02-04 01:14:25.253286 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 7.95s 2026-02-04 01:14:25.253292 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 7.34s 2026-02-04 01:14:25.253298 | orchestrator | neutron : Check neutron containers -------------------------------------- 7.14s 2026-02-04 01:14:25.253305 | orchestrator | neutron : Copying over config.json files for services ------------------- 6.42s 2026-02-04 01:14:25.253311 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 6.22s 2026-02-04 01:14:25.253318 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 6.12s 2026-02-04 01:14:25.253324 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 5.92s 2026-02-04 01:14:25.253330 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 5.79s 2026-02-04 01:14:25.253336 | orchestrator | neutron : Copying over neutron-tls-proxy.cfg ---------------------------- 5.48s 2026-02-04 01:14:25.253342 | orchestrator | service-cert-copy : neutron | Copying over 
backend internal TLS certificate --- 5.20s 2026-02-04 01:14:25.253349 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 5.09s 2026-02-04 01:14:25.253359 | orchestrator | neutron : Copying over eswitchd.conf ------------------------------------ 4.59s 2026-02-04 01:14:25.253366 | orchestrator | neutron : Copying over existing policy file ----------------------------- 4.35s 2026-02-04 01:14:25.253372 | orchestrator | neutron : Copying over extra ml2 plugins -------------------------------- 4.29s 2026-02-04 01:14:25.253378 | orchestrator | neutron : Copying over l3_agent.ini ------------------------------------- 4.20s 2026-02-04 01:14:25.253385 | orchestrator | neutron : Copying over ssh key ------------------------------------------ 3.94s 2026-02-04 01:14:25.253391 | orchestrator | neutron : Copying over ironic_neutron_agent.ini ------------------------- 3.86s 2026-02-04 01:14:25.253398 | orchestrator | 2026-02-04 01:14:25 | INFO  | Task 1eda9427-1779-4b4f-8663-35784540aa78 is in state STARTED 2026-02-04 01:14:25.253498 | orchestrator | 2026-02-04 01:14:25 | INFO  | Task 1660dcde-76b6-4fd0-a57f-036f36e0ebab is in state STARTED 2026-02-04 01:14:25.253508 | orchestrator | 2026-02-04 01:14:25 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:14:28.288762 | orchestrator | 2026-02-04 01:14:28 | INFO  | Task e1a0af0c-c5dc-4576-858b-027414f2952c is in state STARTED 2026-02-04 01:14:28.292615 | orchestrator | 2026-02-04 01:14:28 | INFO  | Task a8192661-ad96-443d-826a-8af14f9390e0 is in state STARTED 2026-02-04 01:14:28.292910 | orchestrator | 2026-02-04 01:14:28 | INFO  | Task 1eda9427-1779-4b4f-8663-35784540aa78 is in state STARTED 2026-02-04 01:14:28.293942 | orchestrator | 2026-02-04 01:14:28 | INFO  | Task 1660dcde-76b6-4fd0-a57f-036f36e0ebab is in state STARTED 2026-02-04 01:14:28.293973 | orchestrator | 2026-02-04 01:14:28 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:14:31.355026 | orchestrator | 
2026-02-04 01:14:31 | INFO  | Task e1a0af0c-c5dc-4576-858b-027414f2952c is in state STARTED 2026-02-04 01:14:31.357284 | orchestrator | 2026-02-04 01:14:31 | INFO  | Task a8192661-ad96-443d-826a-8af14f9390e0 is in state STARTED 2026-02-04 01:14:31.359630 | orchestrator | 2026-02-04 01:14:31 | INFO  | Task 1eda9427-1779-4b4f-8663-35784540aa78 is in state STARTED 2026-02-04 01:14:31.360498 | orchestrator | 2026-02-04 01:14:31 | INFO  | Task 1660dcde-76b6-4fd0-a57f-036f36e0ebab is in state SUCCESS 2026-02-04 01:14:31.360708 | orchestrator | 2026-02-04 01:14:31 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:14:34.408371 | orchestrator | 2026-02-04 01:14:34 | INFO  | Task e1a0af0c-c5dc-4576-858b-027414f2952c is in state STARTED 2026-02-04 01:14:34.409768 | orchestrator | 2026-02-04 01:14:34 | INFO  | Task a8192661-ad96-443d-826a-8af14f9390e0 is in state STARTED 2026-02-04 01:14:34.411377 | orchestrator | 2026-02-04 01:14:34 | INFO  | Task 89f3b89c-853a-4e8d-8e9c-e7afb1d8e9fd is in state STARTED 2026-02-04 01:14:34.415067 | orchestrator | 2026-02-04 01:14:34 | INFO  | Task 1eda9427-1779-4b4f-8663-35784540aa78 is in state STARTED 2026-02-04 01:14:34.415121 | orchestrator | 2026-02-04 01:14:34 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:14:58.778742 | orchestrator | 2026-02-04 01:14:58 | INFO  | Task
e1a0af0c-c5dc-4576-858b-027414f2952c is in state STARTED 2026-02-04 01:14:58.779679 | orchestrator | 2026-02-04 01:14:58 | INFO  | Task a8192661-ad96-443d-826a-8af14f9390e0 is in state STARTED 2026-02-04 01:14:58.780898 | orchestrator | 2026-02-04 01:14:58 | INFO  | Task 89f3b89c-853a-4e8d-8e9c-e7afb1d8e9fd is in state STARTED 2026-02-04 01:14:58.783306 | orchestrator | 2026-02-04 01:14:58 | INFO  | Task 1eda9427-1779-4b4f-8663-35784540aa78 is in state STARTED 2026-02-04 01:14:58.783333 | orchestrator | 2026-02-04 01:14:58 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:15:01.848956 | orchestrator | 2026-02-04 01:15:01 | INFO  | Task e1a0af0c-c5dc-4576-858b-027414f2952c is in state STARTED 2026-02-04 01:15:01.851412 | orchestrator | 2026-02-04 01:15:01 | INFO  | Task a8192661-ad96-443d-826a-8af14f9390e0 is in state SUCCESS 2026-02-04 01:15:01.852991 | orchestrator | 2026-02-04 01:15:01.853043 | orchestrator | 2026-02-04 01:15:01.853060 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-04 01:15:01.853082 | orchestrator | 2026-02-04 01:15:01.853090 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-04 01:15:01.853100 | orchestrator | Wednesday 04 February 2026 01:14:28 +0000 (0:00:00.171) 0:00:00.171 **** 2026-02-04 01:15:01.853107 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:15:01.853115 | orchestrator | ok: [testbed-node-1] 2026-02-04 01:15:01.853122 | orchestrator | ok: [testbed-node-2] 2026-02-04 01:15:01.853128 | orchestrator | 2026-02-04 01:15:01.853134 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-04 01:15:01.853141 | orchestrator | Wednesday 04 February 2026 01:14:29 +0000 (0:00:00.297) 0:00:00.469 **** 2026-02-04 01:15:01.853147 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True) 2026-02-04 01:15:01.853154 | orchestrator | ok: [testbed-node-1] => 
(item=enable_nova_True) 2026-02-04 01:15:01.853161 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True) 2026-02-04 01:15:01.853167 | orchestrator | 2026-02-04 01:15:01.853174 | orchestrator | PLAY [Wait for the Nova service] *********************************************** 2026-02-04 01:15:01.853181 | orchestrator | 2026-02-04 01:15:01.853188 | orchestrator | TASK [Waiting for Nova public port to be UP] *********************************** 2026-02-04 01:15:01.853196 | orchestrator | Wednesday 04 February 2026 01:14:29 +0000 (0:00:00.650) 0:00:01.119 **** 2026-02-04 01:15:01.853201 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:15:01.853205 | orchestrator | ok: [testbed-node-1] 2026-02-04 01:15:01.853209 | orchestrator | ok: [testbed-node-2] 2026-02-04 01:15:01.853213 | orchestrator | 2026-02-04 01:15:01.853217 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 01:15:01.853222 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-04 01:15:01.853228 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-04 01:15:01.853232 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-04 01:15:01.853236 | orchestrator | 2026-02-04 01:15:01.853240 | orchestrator | 2026-02-04 01:15:01.853244 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-04 01:15:01.853248 | orchestrator | Wednesday 04 February 2026 01:14:30 +0000 (0:00:00.853) 0:00:01.973 **** 2026-02-04 01:15:01.853252 | orchestrator | =============================================================================== 2026-02-04 01:15:01.853256 | orchestrator | Waiting for Nova public port to be UP ----------------------------------- 0.85s 2026-02-04 01:15:01.853259 | orchestrator | Group hosts based on enabled services 
----------------------------------- 0.65s 2026-02-04 01:15:01.853277 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.30s 2026-02-04 01:15:01.853281 | orchestrator | 2026-02-04 01:15:01.853285 | orchestrator | 2026-02-04 01:15:01.853289 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-04 01:15:01.853293 | orchestrator | 2026-02-04 01:15:01.853297 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-04 01:15:01.853300 | orchestrator | Wednesday 04 February 2026 01:13:09 +0000 (0:00:00.444) 0:00:00.444 **** 2026-02-04 01:15:01.853304 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:15:01.853308 | orchestrator | ok: [testbed-node-1] 2026-02-04 01:15:01.853312 | orchestrator | ok: [testbed-node-2] 2026-02-04 01:15:01.853316 | orchestrator | 2026-02-04 01:15:01.853320 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-04 01:15:01.853329 | orchestrator | Wednesday 04 February 2026 01:13:10 +0000 (0:00:00.619) 0:00:01.064 **** 2026-02-04 01:15:01.853333 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2026-02-04 01:15:01.853337 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True) 2026-02-04 01:15:01.853341 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True) 2026-02-04 01:15:01.853345 | orchestrator | 2026-02-04 01:15:01.853349 | orchestrator | PLAY [Apply role magnum] ******************************************************* 2026-02-04 01:15:01.853352 | orchestrator | 2026-02-04 01:15:01.853356 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-02-04 01:15:01.853360 | orchestrator | Wednesday 04 February 2026 01:13:11 +0000 (0:00:00.598) 0:00:01.662 **** 2026-02-04 01:15:01.853364 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 2026-02-04 01:15:01.853368 | orchestrator | 2026-02-04 01:15:01.853372 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************ 2026-02-04 01:15:01.853375 | orchestrator | Wednesday 04 February 2026 01:13:12 +0000 (0:00:01.056) 0:00:02.719 **** 2026-02-04 01:15:01.853379 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra)) 2026-02-04 01:15:01.853383 | orchestrator | 2026-02-04 01:15:01.853387 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] *********************** 2026-02-04 01:15:01.853391 | orchestrator | Wednesday 04 February 2026 01:13:15 +0000 (0:00:03.702) 0:00:06.422 **** 2026-02-04 01:15:01.853394 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal) 2026-02-04 01:15:01.853398 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public) 2026-02-04 01:15:01.853402 | orchestrator | 2026-02-04 01:15:01.853406 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************ 2026-02-04 01:15:01.853410 | orchestrator | Wednesday 04 February 2026 01:13:23 +0000 (0:00:07.515) 0:00:13.937 **** 2026-02-04 01:15:01.853414 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-04 01:15:01.853418 | orchestrator | 2026-02-04 01:15:01.853421 | orchestrator | TASK [service-ks-register : magnum | Creating users] *************************** 2026-02-04 01:15:01.853425 | orchestrator | Wednesday 04 February 2026 01:13:27 +0000 (0:00:03.820) 0:00:17.758 **** 2026-02-04 01:15:01.853437 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service) 2026-02-04 01:15:01.853442 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-04 01:15:01.853445 | orchestrator | 2026-02-04 01:15:01.853449 | orchestrator | TASK [service-ks-register : magnum | Creating roles] 
*************************** 2026-02-04 01:15:01.853453 | orchestrator | Wednesday 04 February 2026 01:13:31 +0000 (0:00:04.196) 0:00:21.954 **** 2026-02-04 01:15:01.853457 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-04 01:15:01.853461 | orchestrator | 2026-02-04 01:15:01.853465 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] ********************** 2026-02-04 01:15:01.853469 | orchestrator | Wednesday 04 February 2026 01:13:34 +0000 (0:00:03.373) 0:00:25.328 **** 2026-02-04 01:15:01.853472 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin) 2026-02-04 01:15:01.853479 | orchestrator | 2026-02-04 01:15:01.853483 | orchestrator | TASK [magnum : Creating Magnum trustee domain] ********************************* 2026-02-04 01:15:01.853487 | orchestrator | Wednesday 04 February 2026 01:13:38 +0000 (0:00:03.316) 0:00:28.644 **** 2026-02-04 01:15:01.853491 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:15:01.853494 | orchestrator | 2026-02-04 01:15:01.853498 | orchestrator | TASK [magnum : Creating Magnum trustee user] *********************************** 2026-02-04 01:15:01.853502 | orchestrator | Wednesday 04 February 2026 01:13:40 +0000 (0:00:02.846) 0:00:31.491 **** 2026-02-04 01:15:01.853506 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:15:01.853510 | orchestrator | 2026-02-04 01:15:01.853514 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ****************************** 2026-02-04 01:15:01.853517 | orchestrator | Wednesday 04 February 2026 01:13:44 +0000 (0:00:03.742) 0:00:35.234 **** 2026-02-04 01:15:01.853521 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:15:01.853525 | orchestrator | 2026-02-04 01:15:01.853529 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2026-02-04 01:15:01.853533 | orchestrator | Wednesday 04 February 2026 01:13:48 +0000 (0:00:03.624) 0:00:38.858 **** 2026-02-04 01:15:01.853538 
| orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-04 01:15:01.853546 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-04 01:15:01.853551 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 
'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-04 01:15:01.853558 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-04 01:15:01.853566 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-04 01:15:01.853570 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-04 01:15:01.853575 | orchestrator | 2026-02-04 01:15:01.853579 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2026-02-04 01:15:01.853583 | orchestrator | Wednesday 04 February 2026 01:13:50 +0000 (0:00:02.210) 0:00:41.068 **** 2026-02-04 01:15:01.853586 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:15:01.853591 | orchestrator | 2026-02-04 01:15:01.853597 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2026-02-04 01:15:01.853606 | orchestrator | Wednesday 04 February 2026 01:13:50 +0000 (0:00:00.128) 0:00:41.196 **** 2026-02-04 01:15:01.853612 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:15:01.853619 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:15:01.853635 | orchestrator | skipping: [testbed-node-2] 
2026-02-04 01:15:01.853642 | orchestrator | 2026-02-04 01:15:01.853659 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] *************************** 2026-02-04 01:15:01.853666 | orchestrator | Wednesday 04 February 2026 01:13:51 +0000 (0:00:00.567) 0:00:41.764 **** 2026-02-04 01:15:01.853671 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-04 01:15:01.853675 | orchestrator | 2026-02-04 01:15:01.853679 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2026-02-04 01:15:01.853683 | orchestrator | Wednesday 04 February 2026 01:13:52 +0000 (0:00:01.053) 0:00:42.817 **** 2026-02-04 01:15:01.853687 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-04 01:15:01.853715 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-04 01:15:01.853720 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-04 01:15:01.853725 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-04 01:15:01.853734 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-04 01:15:01.853745 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-04 01:15:01.853756 | orchestrator | 2026-02-04 01:15:01.853762 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2026-02-04 01:15:01.853769 | orchestrator | Wednesday 04 
February 2026 01:13:55 +0000 (0:00:02.832) 0:00:45.649 **** 2026-02-04 01:15:01.853775 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:15:01.853782 | orchestrator | ok: [testbed-node-1] 2026-02-04 01:15:01.853788 | orchestrator | ok: [testbed-node-2] 2026-02-04 01:15:01.853795 | orchestrator | 2026-02-04 01:15:01.853799 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-02-04 01:15:01.853806 | orchestrator | Wednesday 04 February 2026 01:13:55 +0000 (0:00:00.352) 0:00:46.002 **** 2026-02-04 01:15:01.853811 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 01:15:01.853815 | orchestrator | 2026-02-04 01:15:01.853819 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2026-02-04 01:15:01.853822 | orchestrator | Wednesday 04 February 2026 01:13:56 +0000 (0:00:01.039) 0:00:47.042 **** 2026-02-04 01:15:01.853826 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-04 01:15:01.853831 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-04 01:15:01.853837 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-04 01:15:01.853844 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-04 01:15:01.853851 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-04 01:15:01.853856 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-04 01:15:01.853860 | orchestrator | 2026-02-04 01:15:01.853863 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2026-02-04 01:15:01.853867 | orchestrator | Wednesday 04 February 2026 01:13:58 +0000 (0:00:02.296) 0:00:49.338 **** 2026-02-04 01:15:01.853871 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-04 01:15:01.853877 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-04 01:15:01.853884 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:15:01.853888 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-04 01:15:01.853895 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-04 01:15:01.853899 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:15:01.853903 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-04 01:15:01.853907 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-04 01:15:01.853911 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:15:01.853915 | orchestrator | 2026-02-04 01:15:01.853919 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2026-02-04 01:15:01.853923 | orchestrator | Wednesday 04 February 2026 01:13:59 +0000 (0:00:00.691) 
0:00:50.030 **** 2026-02-04 01:15:01.853933 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-04 01:15:01.853938 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-04 01:15:01.853942 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:15:01.853949 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-04 01:15:01.853953 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-04 01:15:01.853957 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:15:01.853981 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-04 01:15:01.853990 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-04 01:15:01.853994 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:15:01.853998 | orchestrator | 2026-02-04 01:15:01.854002 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2026-02-04 01:15:01.854006 | orchestrator | Wednesday 04 February 2026 01:14:00 +0000 (0:00:01.378) 0:00:51.409 **** 2026-02-04 01:15:01.854174 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-04 01:15:01.854183 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-04 01:15:01.854187 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-04 01:15:01.854198 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-04 01:15:01.854202 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 
5672'], 'timeout': '30'}}}) 2026-02-04 01:15:01.854209 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-04 01:15:01.854213 | orchestrator | 2026-02-04 01:15:01.854217 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2026-02-04 01:15:01.854221 | orchestrator | Wednesday 04 February 2026 01:14:03 +0000 (0:00:02.429) 0:00:53.839 **** 2026-02-04 01:15:01.854225 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 
'listen_port': '9511'}}}}) 2026-02-04 01:15:01.854230 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-04 01:15:01.854246 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-04 01:15:01.854251 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-04 01:15:01.854257 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-04 01:15:01.854262 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-04 01:15:01.854266 | orchestrator | 2026-02-04 01:15:01.854270 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2026-02-04 01:15:01.854274 | orchestrator | Wednesday 04 February 2026 01:14:09 +0000 (0:00:06.152) 0:00:59.991 **** 2026-02-04 01:15:01.854278 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-04 01:15:01.854286 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-04 01:15:01.854291 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:15:01.854295 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-04 01:15:01.854301 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-04 
01:15:01.854306 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:15:01.854310 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-04 01:15:01.854316 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-04 01:15:01.854320 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:15:01.854324 | orchestrator | 2026-02-04 01:15:01.854328 | orchestrator | TASK [magnum : Check magnum containers] 
**************************************** 2026-02-04 01:15:01.854343 | orchestrator | Wednesday 04 February 2026 01:14:10 +0000 (0:00:00.675) 0:01:00.666 **** 2026-02-04 01:15:01.854348 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-04 01:15:01.854355 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 
'listen_port': '9511'}}}}) 2026-02-04 01:15:01.854359 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-04 01:15:01.854366 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-04 01:15:01.854372 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 
'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-04 01:15:01.854376 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-04 01:15:01.854380 | orchestrator | 2026-02-04 01:15:01.854384 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-02-04 01:15:01.854388 | orchestrator | Wednesday 04 February 2026 01:14:12 +0000 (0:00:02.115) 0:01:02.782 **** 2026-02-04 01:15:01.854392 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:15:01.854395 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:15:01.854399 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:15:01.854403 | orchestrator | 2026-02-04 01:15:01.854407 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2026-02-04 01:15:01.854411 | orchestrator | Wednesday 04 
February 2026 01:14:12 +0000 (0:00:00.286) 0:01:03.068 **** 2026-02-04 01:15:01.854415 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:15:01.854419 | orchestrator | 2026-02-04 01:15:01.854423 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2026-02-04 01:15:01.854426 | orchestrator | Wednesday 04 February 2026 01:14:14 +0000 (0:00:02.097) 0:01:05.166 **** 2026-02-04 01:15:01.854430 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:15:01.854434 | orchestrator | 2026-02-04 01:15:01.854438 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2026-02-04 01:15:01.854442 | orchestrator | Wednesday 04 February 2026 01:14:16 +0000 (0:00:02.162) 0:01:07.329 **** 2026-02-04 01:15:01.854450 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:15:01.854456 | orchestrator | 2026-02-04 01:15:01.854461 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-02-04 01:15:01.854466 | orchestrator | Wednesday 04 February 2026 01:14:32 +0000 (0:00:15.606) 0:01:22.935 **** 2026-02-04 01:15:01.854472 | orchestrator | 2026-02-04 01:15:01.854480 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-02-04 01:15:01.854493 | orchestrator | Wednesday 04 February 2026 01:14:32 +0000 (0:00:00.166) 0:01:23.102 **** 2026-02-04 01:15:01.854500 | orchestrator | 2026-02-04 01:15:01.854506 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-02-04 01:15:01.854512 | orchestrator | Wednesday 04 February 2026 01:14:32 +0000 (0:00:00.157) 0:01:23.259 **** 2026-02-04 01:15:01.854519 | orchestrator | 2026-02-04 01:15:01.854524 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2026-02-04 01:15:01.854531 | orchestrator | Wednesday 04 February 2026 01:14:32 +0000 (0:00:00.287) 0:01:23.546 **** 2026-02-04 
01:15:01.854538 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:15:01.854544 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:15:01.854550 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:15:01.854556 | orchestrator | 2026-02-04 01:15:01.854563 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2026-02-04 01:15:01.854569 | orchestrator | Wednesday 04 February 2026 01:14:46 +0000 (0:00:13.235) 0:01:36.782 **** 2026-02-04 01:15:01.854576 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:15:01.854582 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:15:01.854588 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:15:01.854595 | orchestrator | 2026-02-04 01:15:01.854601 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 01:15:01.854608 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-04 01:15:01.854616 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-04 01:15:01.854623 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-04 01:15:01.854630 | orchestrator | 2026-02-04 01:15:01.854637 | orchestrator | 2026-02-04 01:15:01.854644 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-04 01:15:01.854651 | orchestrator | Wednesday 04 February 2026 01:14:59 +0000 (0:00:13.636) 0:01:50.419 **** 2026-02-04 01:15:01.854659 | orchestrator | =============================================================================== 2026-02-04 01:15:01.854666 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 15.61s 2026-02-04 01:15:01.854673 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 13.64s 2026-02-04 01:15:01.854680 | 
orchestrator | magnum : Restart magnum-api container ---------------------------------- 13.24s 2026-02-04 01:15:01.854686 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 7.52s 2026-02-04 01:15:01.854693 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 6.15s 2026-02-04 01:15:01.854700 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 4.20s 2026-02-04 01:15:01.854707 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.82s 2026-02-04 01:15:01.854718 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 3.74s 2026-02-04 01:15:01.854725 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.70s 2026-02-04 01:15:01.854732 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.62s 2026-02-04 01:15:01.854738 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.37s 2026-02-04 01:15:01.854745 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 3.32s 2026-02-04 01:15:01.854752 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 2.85s 2026-02-04 01:15:01.854759 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.83s 2026-02-04 01:15:01.854766 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.43s 2026-02-04 01:15:01.854772 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.30s 2026-02-04 01:15:01.854785 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 2.21s 2026-02-04 01:15:01.854793 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.16s 2026-02-04 01:15:01.854799 | orchestrator | 
magnum : Check magnum containers ---------------------------------------- 2.12s 2026-02-04 01:15:01.854806 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.10s 2026-02-04 01:15:01.854858 | orchestrator | 2026-02-04 01:15:01 | INFO  | Task 89f3b89c-853a-4e8d-8e9c-e7afb1d8e9fd is in state STARTED 2026-02-04 01:15:01.856464 | orchestrator | 2026-02-04 01:15:01 | INFO  | Task 1eda9427-1779-4b4f-8663-35784540aa78 is in state STARTED 2026-02-04 01:15:01.856506 | orchestrator | 2026-02-04 01:15:01 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:15:04.901034 | orchestrator | 2026-02-04 01:15:04 | INFO  | Task e1a0af0c-c5dc-4576-858b-027414f2952c is in state STARTED 2026-02-04 01:15:04.905534 | orchestrator | 2026-02-04 01:15:04 | INFO  | Task 89f3b89c-853a-4e8d-8e9c-e7afb1d8e9fd is in state STARTED 2026-02-04 01:15:04.908673 | orchestrator | 2026-02-04 01:15:04 | INFO  | Task 1eda9427-1779-4b4f-8663-35784540aa78 is in state STARTED 2026-02-04 01:15:04.908727 | orchestrator | 2026-02-04 01:15:04 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:15:07.953370 | orchestrator | 2026-02-04 01:15:07 | INFO  | Task e1a0af0c-c5dc-4576-858b-027414f2952c is in state STARTED 2026-02-04 01:15:07.955326 | orchestrator | 2026-02-04 01:15:07 | INFO  | Task 89f3b89c-853a-4e8d-8e9c-e7afb1d8e9fd is in state STARTED 2026-02-04 01:15:07.960084 | orchestrator | 2026-02-04 01:15:07 | INFO  | Task 1eda9427-1779-4b4f-8663-35784540aa78 is in state STARTED 2026-02-04 01:15:07.960430 | orchestrator | 2026-02-04 01:15:07 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:15:10.994231 | orchestrator | 2026-02-04 01:15:10 | INFO  | Task e1a0af0c-c5dc-4576-858b-027414f2952c is in state STARTED 2026-02-04 01:15:10.997232 | orchestrator | 2026-02-04 01:15:10 | INFO  | Task 89f3b89c-853a-4e8d-8e9c-e7afb1d8e9fd is in state STARTED 2026-02-04 01:15:10.999716 | orchestrator | 2026-02-04 01:15:11 | INFO  | Task 
1eda9427-1779-4b4f-8663-35784540aa78 is in state STARTED 2026-02-04 01:15:10.999891 | orchestrator | 2026-02-04 01:15:11 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:15:14.047217 | orchestrator | 2026-02-04 01:15:14 | INFO  | Task e1a0af0c-c5dc-4576-858b-027414f2952c is in state STARTED 2026-02-04 01:15:14.048912 | orchestrator | 2026-02-04 01:15:14 | INFO  | Task 89f3b89c-853a-4e8d-8e9c-e7afb1d8e9fd is in state STARTED 2026-02-04 01:15:14.050442 | orchestrator | 2026-02-04 01:15:14 | INFO  | Task 1eda9427-1779-4b4f-8663-35784540aa78 is in state STARTED 2026-02-04 01:15:14.050514 | orchestrator | 2026-02-04 01:15:14 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:15:17.092016 | orchestrator | 2026-02-04 01:15:17 | INFO  | Task e1a0af0c-c5dc-4576-858b-027414f2952c is in state STARTED 2026-02-04 01:15:17.093291 | orchestrator | 2026-02-04 01:15:17 | INFO  | Task 89f3b89c-853a-4e8d-8e9c-e7afb1d8e9fd is in state STARTED 2026-02-04 01:15:17.095154 | orchestrator | 2026-02-04 01:15:17 | INFO  | Task 1eda9427-1779-4b4f-8663-35784540aa78 is in state STARTED 2026-02-04 01:15:17.095187 | orchestrator | 2026-02-04 01:15:17 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:15:20.277569 | orchestrator | 2026-02-04 01:15:20 | INFO  | Task e1a0af0c-c5dc-4576-858b-027414f2952c is in state STARTED 2026-02-04 01:15:20.278314 | orchestrator | 2026-02-04 01:15:20 | INFO  | Task 89f3b89c-853a-4e8d-8e9c-e7afb1d8e9fd is in state STARTED 2026-02-04 01:15:20.279597 | orchestrator | 2026-02-04 01:15:20 | INFO  | Task 1eda9427-1779-4b4f-8663-35784540aa78 is in state STARTED 2026-02-04 01:15:20.279753 | orchestrator | 2026-02-04 01:15:20 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:15:23.316106 | orchestrator | 2026-02-04 01:15:23 | INFO  | Task e1a0af0c-c5dc-4576-858b-027414f2952c is in state STARTED 2026-02-04 01:15:23.319072 | orchestrator | 2026-02-04 01:15:23 | INFO  | Task 89f3b89c-853a-4e8d-8e9c-e7afb1d8e9fd is in state 
STARTED 2026-02-04 01:15:23.320480 | orchestrator | 2026-02-04 01:15:23 | INFO  | Task 1eda9427-1779-4b4f-8663-35784540aa78 is in state STARTED 2026-02-04 01:15:23.320518 | orchestrator | 2026-02-04 01:15:23 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:15:26.370118 | orchestrator | 2026-02-04 01:15:26 | INFO  | Task e1a0af0c-c5dc-4576-858b-027414f2952c is in state STARTED 2026-02-04 01:15:26.371817 | orchestrator | 2026-02-04 01:15:26 | INFO  | Task 89f3b89c-853a-4e8d-8e9c-e7afb1d8e9fd is in state STARTED 2026-02-04 01:15:26.373906 | orchestrator | 2026-02-04 01:15:26 | INFO  | Task 1eda9427-1779-4b4f-8663-35784540aa78 is in state STARTED 2026-02-04 01:15:26.373945 | orchestrator | 2026-02-04 01:15:26 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:15:29.422984 | orchestrator | 2026-02-04 01:15:29 | INFO  | Task e1a0af0c-c5dc-4576-858b-027414f2952c is in state STARTED 2026-02-04 01:15:29.424288 | orchestrator | 2026-02-04 01:15:29 | INFO  | Task 89f3b89c-853a-4e8d-8e9c-e7afb1d8e9fd is in state STARTED 2026-02-04 01:15:29.426461 | orchestrator | 2026-02-04 01:15:29 | INFO  | Task 1eda9427-1779-4b4f-8663-35784540aa78 is in state STARTED 2026-02-04 01:15:29.426539 | orchestrator | 2026-02-04 01:15:29 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:15:32.469947 | orchestrator | 2026-02-04 01:15:32 | INFO  | Task e1a0af0c-c5dc-4576-858b-027414f2952c is in state STARTED 2026-02-04 01:15:32.471441 | orchestrator | 2026-02-04 01:15:32 | INFO  | Task 89f3b89c-853a-4e8d-8e9c-e7afb1d8e9fd is in state STARTED 2026-02-04 01:15:32.472836 | orchestrator | 2026-02-04 01:15:32 | INFO  | Task 1eda9427-1779-4b4f-8663-35784540aa78 is in state STARTED 2026-02-04 01:15:32.472924 | orchestrator | 2026-02-04 01:15:32 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:15:35.524144 | orchestrator | 2026-02-04 01:15:35 | INFO  | Task e1a0af0c-c5dc-4576-858b-027414f2952c is in state STARTED 2026-02-04 01:15:35.525415 | orchestrator | 
2026-02-04 01:15:35 | INFO  | Task 89f3b89c-853a-4e8d-8e9c-e7afb1d8e9fd is in state STARTED 2026-02-04 01:15:35.526972 | orchestrator | 2026-02-04 01:15:35 | INFO  | Task 1eda9427-1779-4b4f-8663-35784540aa78 is in state STARTED 2026-02-04 01:15:35.527027 | orchestrator | 2026-02-04 01:15:35 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:15:38.570201 | orchestrator | 2026-02-04 01:15:38 | INFO  | Task e1a0af0c-c5dc-4576-858b-027414f2952c is in state STARTED 2026-02-04 01:15:38.572418 | orchestrator | 2026-02-04 01:15:38 | INFO  | Task 89f3b89c-853a-4e8d-8e9c-e7afb1d8e9fd is in state STARTED 2026-02-04 01:15:38.574194 | orchestrator | 2026-02-04 01:15:38 | INFO  | Task 1eda9427-1779-4b4f-8663-35784540aa78 is in state STARTED 2026-02-04 01:15:38.574226 | orchestrator | 2026-02-04 01:15:38 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:15:41.611465 | orchestrator | 2026-02-04 01:15:41 | INFO  | Task e1a0af0c-c5dc-4576-858b-027414f2952c is in state STARTED 2026-02-04 01:15:41.612412 | orchestrator | 2026-02-04 01:15:41 | INFO  | Task 89f3b89c-853a-4e8d-8e9c-e7afb1d8e9fd is in state STARTED 2026-02-04 01:15:41.613570 | orchestrator | 2026-02-04 01:15:41 | INFO  | Task 1eda9427-1779-4b4f-8663-35784540aa78 is in state STARTED 2026-02-04 01:15:41.613619 | orchestrator | 2026-02-04 01:15:41 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:15:44.696328 | orchestrator | 2026-02-04 01:15:44 | INFO  | Task e1a0af0c-c5dc-4576-858b-027414f2952c is in state STARTED 2026-02-04 01:15:44.696450 | orchestrator | 2026-02-04 01:15:44 | INFO  | Task 89f3b89c-853a-4e8d-8e9c-e7afb1d8e9fd is in state STARTED 2026-02-04 01:15:44.696459 | orchestrator | 2026-02-04 01:15:44 | INFO  | Task 1eda9427-1779-4b4f-8663-35784540aa78 is in state STARTED 2026-02-04 01:15:44.696467 | orchestrator | 2026-02-04 01:15:44 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:15:47.705899 | orchestrator | 2026-02-04 01:15:47 | INFO  | Task 
e1a0af0c-c5dc-4576-858b-027414f2952c is in state STARTED 2026-02-04 01:15:47.707229 | orchestrator | 2026-02-04 01:15:47 | INFO  | Task 89f3b89c-853a-4e8d-8e9c-e7afb1d8e9fd is in state STARTED 2026-02-04 01:15:47.709011 | orchestrator | 2026-02-04 01:15:47 | INFO  | Task 1eda9427-1779-4b4f-8663-35784540aa78 is in state STARTED 2026-02-04 01:15:47.709079 | orchestrator | 2026-02-04 01:15:47 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:15:50.760471 | orchestrator | 2026-02-04 01:15:50 | INFO  | Task e1a0af0c-c5dc-4576-858b-027414f2952c is in state STARTED 2026-02-04 01:15:50.761150 | orchestrator | 2026-02-04 01:15:50 | INFO  | Task 89f3b89c-853a-4e8d-8e9c-e7afb1d8e9fd is in state STARTED 2026-02-04 01:15:50.762408 | orchestrator | 2026-02-04 01:15:50 | INFO  | Task 1eda9427-1779-4b4f-8663-35784540aa78 is in state STARTED 2026-02-04 01:15:50.762459 | orchestrator | 2026-02-04 01:15:50 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:15:53.820216 | orchestrator | 2026-02-04 01:15:53 | INFO  | Task e1a0af0c-c5dc-4576-858b-027414f2952c is in state STARTED 2026-02-04 01:15:53.821075 | orchestrator | 2026-02-04 01:15:53 | INFO  | Task 89f3b89c-853a-4e8d-8e9c-e7afb1d8e9fd is in state STARTED 2026-02-04 01:15:53.821885 | orchestrator | 2026-02-04 01:15:53 | INFO  | Task 1eda9427-1779-4b4f-8663-35784540aa78 is in state STARTED 2026-02-04 01:15:53.821910 | orchestrator | 2026-02-04 01:15:53 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:15:56.857634 | orchestrator | 2026-02-04 01:15:56 | INFO  | Task e1a0af0c-c5dc-4576-858b-027414f2952c is in state STARTED 2026-02-04 01:15:56.858408 | orchestrator | 2026-02-04 01:15:56 | INFO  | Task 89f3b89c-853a-4e8d-8e9c-e7afb1d8e9fd is in state STARTED 2026-02-04 01:15:56.865471 | orchestrator | 2026-02-04 01:15:56.865521 | orchestrator | 2026-02-04 01:15:56.865529 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-04 01:15:56.865535 | 
orchestrator | 2026-02-04 01:15:56.865541 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-04 01:15:56.865546 | orchestrator | Wednesday 04 February 2026 01:13:58 +0000 (0:00:00.271) 0:00:00.271 **** 2026-02-04 01:15:56.865552 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:15:56.865559 | orchestrator | ok: [testbed-node-1] 2026-02-04 01:15:56.865564 | orchestrator | ok: [testbed-node-2] 2026-02-04 01:15:56.865569 | orchestrator | 2026-02-04 01:15:56.865573 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-04 01:15:56.865576 | orchestrator | Wednesday 04 February 2026 01:13:58 +0000 (0:00:00.332) 0:00:00.604 **** 2026-02-04 01:15:56.865580 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True) 2026-02-04 01:15:56.865583 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2026-02-04 01:15:56.865586 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2026-02-04 01:15:56.865590 | orchestrator | 2026-02-04 01:15:56.865593 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2026-02-04 01:15:56.865596 | orchestrator | 2026-02-04 01:15:56.865599 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-02-04 01:15:56.865614 | orchestrator | Wednesday 04 February 2026 01:13:59 +0000 (0:00:00.457) 0:00:01.061 **** 2026-02-04 01:15:56.865618 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 01:15:56.865621 | orchestrator | 2026-02-04 01:15:56.865625 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2026-02-04 01:15:56.865628 | orchestrator | Wednesday 04 February 2026 01:13:59 +0000 (0:00:00.615) 0:00:01.677 **** 2026-02-04 01:15:56.865632 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-04 01:15:56.865637 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-04 01:15:56.865646 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 
'listen_port': '3000'}}}}) 2026-02-04 01:15:56.865650 | orchestrator | 2026-02-04 01:15:56.865653 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2026-02-04 01:15:56.865656 | orchestrator | Wednesday 04 February 2026 01:14:00 +0000 (0:00:00.817) 0:00:02.494 **** 2026-02-04 01:15:56.865659 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2026-02-04 01:15:56.865663 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2026-02-04 01:15:56.865666 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-04 01:15:56.865670 | orchestrator | 2026-02-04 01:15:56.865673 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-02-04 01:15:56.865676 | orchestrator | Wednesday 04 February 2026 01:14:01 +0000 (0:00:00.997) 0:00:03.492 **** 2026-02-04 01:15:56.865679 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 01:15:56.865682 | orchestrator | 2026-02-04 01:15:56.865686 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2026-02-04 01:15:56.865689 | orchestrator | Wednesday 04 February 2026 01:14:02 +0000 (0:00:00.798) 0:00:04.291 **** 2026-02-04 01:15:56.865700 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '3000', 'listen_port': '3000'}}}}) 2026-02-04 01:15:56.865707 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-04 01:15:56.865710 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-04 01:15:56.865714 | orchestrator | 2026-02-04 01:15:56.865717 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2026-02-04 01:15:56.865720 | orchestrator | Wednesday 04 February 2026 01:14:03 +0000 (0:00:01.357) 0:00:05.648 **** 2026-02-04 01:15:56.865723 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': 
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-04 01:15:56.865728 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-04 01:15:56.865732 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:15:56.865735 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:15:56.865741 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': 
'3000'}}}})  2026-02-04 01:15:56.865747 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:15:56.865750 | orchestrator | 2026-02-04 01:15:56.865773 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2026-02-04 01:15:56.865777 | orchestrator | Wednesday 04 February 2026 01:14:04 +0000 (0:00:00.682) 0:00:06.330 **** 2026-02-04 01:15:56.865780 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-04 01:15:56.865786 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-04 01:15:56.865817 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:15:56.865824 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:15:56.865830 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-04 01:15:56.865835 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:15:56.865841 | orchestrator | 2026-02-04 01:15:56.865846 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2026-02-04 01:15:56.865851 | orchestrator | Wednesday 04 February 2026 01:14:05 +0000 (0:00:01.403) 0:00:07.734 **** 2026-02-04 01:15:56.865860 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-04 01:15:56.865866 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-04 01:15:56.865882 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-04 01:15:56.865888 | orchestrator | 2026-02-04 01:15:56.865893 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2026-02-04 01:15:56.865899 | orchestrator | Wednesday 04 February 2026 01:14:07 +0000 (0:00:01.368) 0:00:09.103 **** 2026-02-04 01:15:56.865904 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-04 01:15:56.865910 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-04 01:15:56.865914 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-04 01:15:56.865917 | orchestrator | 2026-02-04 01:15:56.865920 | orchestrator | TASK [grafana : Copying over extra configuration file] ************************* 2026-02-04 01:15:56.865924 | orchestrator | Wednesday 04 February 2026 01:14:08 +0000 (0:00:01.625) 0:00:10.729 **** 2026-02-04 01:15:56.865927 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:15:56.865930 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:15:56.865933 | orchestrator | skipping: 
[testbed-node-2] 2026-02-04 01:15:56.866050 | orchestrator | 2026-02-04 01:15:56.866058 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] ************* 2026-02-04 01:15:56.866062 | orchestrator | Wednesday 04 February 2026 01:14:09 +0000 (0:00:00.531) 0:00:11.260 **** 2026-02-04 01:15:56.866066 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-02-04 01:15:56.866073 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-02-04 01:15:56.866076 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-02-04 01:15:56.866080 | orchestrator | 2026-02-04 01:15:56.866084 | orchestrator | TASK [grafana : Configuring dashboards provisioning] *************************** 2026-02-04 01:15:56.866088 | orchestrator | Wednesday 04 February 2026 01:14:10 +0000 (0:00:01.146) 0:00:12.407 **** 2026-02-04 01:15:56.866092 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-02-04 01:15:56.866096 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-02-04 01:15:56.866099 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-02-04 01:15:56.866103 | orchestrator | 2026-02-04 01:15:56.866107 | orchestrator | TASK [grafana : Find custom grafana dashboards] ******************************** 2026-02-04 01:15:56.866110 | orchestrator | Wednesday 04 February 2026 01:14:11 +0000 (0:00:01.213) 0:00:13.621 **** 2026-02-04 01:15:56.866118 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-04 01:15:56.866122 | orchestrator | 2026-02-04 01:15:56.866125 | orchestrator | TASK [grafana : Find templated grafana dashboards] ***************************** 
2026-02-04 01:15:56.866129 | orchestrator | Wednesday 04 February 2026 01:14:12 +0000 (0:00:00.811) 0:00:14.432 **** 2026-02-04 01:15:56.866133 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access 2026-02-04 01:15:56.866137 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory 2026-02-04 01:15:56.866140 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:15:56.866144 | orchestrator | ok: [testbed-node-1] 2026-02-04 01:15:56.866148 | orchestrator | ok: [testbed-node-2] 2026-02-04 01:15:56.866151 | orchestrator | 2026-02-04 01:15:56.866155 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] **************************** 2026-02-04 01:15:56.866159 | orchestrator | Wednesday 04 February 2026 01:14:13 +0000 (0:00:00.657) 0:00:15.089 **** 2026-02-04 01:15:56.866163 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:15:56.866166 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:15:56.866170 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:15:56.866174 | orchestrator | 2026-02-04 01:15:56.866177 | orchestrator | TASK [grafana : Copying over custom dashboards] ******************************** 2026-02-04 01:15:56.866181 | orchestrator | Wednesday 04 February 2026 01:14:13 +0000 (0:00:00.484) 0:00:15.574 **** 2026-02-04 01:15:56.866185 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1321283, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9081757, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': 
False}}) 2026-02-04 01:15:56.866190 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1321283, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9081757, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:15:56.866195 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1321283, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9081757, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:15:56.866200 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1321335, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9185486, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:15:56.866207 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1321335, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9185486, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:15:56.866211 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1321335, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9185486, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:15:56.866215 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1321292, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9101548, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:15:56.866219 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1321292, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9101548, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:15:56.866223 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1321292, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9101548, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:15:56.866230 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1321339, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9203026, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': 
True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:15:56.866234 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1321339, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9203026, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:15:56.866241 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1321339, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9203026, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:15:56.866245 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1321312, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9133775, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 
'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:15:56.866249 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1321312, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9133775, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:15:56.866252 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1321312, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9133775, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:15:56.866262 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1321327, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9172618, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:15:56.866266 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1321327, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9172618, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:15:56.866274 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1321327, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9172618, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:15:56.866278 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1321281, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 
1770164327.907102, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:15:56.866282 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1321281, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.907102, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:15:56.866286 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1321281, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.907102, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:15:56.866291 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1321289, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9084857, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:15:56.866297 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1321289, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9084857, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:15:56.866301 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1321289, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9084857, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:15:56.866307 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1321297, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 
'ctime': 1770164327.910159, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:15:56.866311 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1321297, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.910159, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:15:56.866315 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1321297, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.910159, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:15:56.866320 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1321317, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 
'mtime': 1770163339.0, 'ctime': 1770164327.9150038, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:15:56.866326 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1321317, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9150038, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:15:56.866330 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1321317, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9150038, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:15:56.866336 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1321334, 'dev': 120, 'nlink': 1, 'atime': 
1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9176624, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:15:56.866340 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1321334, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9176624, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:15:56.866344 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1321334, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9176624, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:15:56.866350 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1321291, 'dev': 120, 
'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9088585, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:15:56.866353 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1321291, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9088585, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:15:56.866358 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1321291, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9088585, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:15:56.866364 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 
'inode': 1321324, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9164252, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:15:56.866368 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1321324, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9164252, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:15:56.866372 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1321324, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9164252, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:15:56.866378 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 
0, 'gid': 0, 'size': 38432, 'inode': 1321314, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9139767, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:15:56.866382 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1321314, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9139767, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:15:56.866387 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1321314, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9139767, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:15:56.866392 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1321306, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9131908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:15:56.866401 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1321306, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9131908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:15:56.866407 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1321306, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9131908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:15:56.866415 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1321303, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9111593, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:15:56.866421 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1321303, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9111593, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:15:56.866428 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1321303, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9111593, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:15:56.866433 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 
'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1321320, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9154465, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:15:56.866441 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1321320, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9154465, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:15:56.866447 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1321320, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9154465, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:15:56.866455 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1321300, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.910992, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:15:56.866460 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1321300, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.910992, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:15:56.866466 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1321300, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.910992, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:15:56.866473 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1321331, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9176624, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:15:56.866481 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1321331, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9176624, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:15:56.866486 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1321331, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9176624, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 
01:15:56.866494 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1321523, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.952998, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:15:56.866500 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1321523, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.952998, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:15:56.866505 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1321523, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.952998, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 
'isgid': False}}) 2026-02-04 01:15:56.866515 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1321400, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9309561, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:15:56.866521 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1321400, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9309561, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:15:56.866529 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1321372, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9241593, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:15:56.866553 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1321400, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9309561, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:15:56.866558 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1321372, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9241593, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:15:56.866563 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1321440, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9335413, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:15:56.866576 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1321372, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9241593, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:15:56.866582 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1321440, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9335413, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:15:56.866591 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1321356, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 
'ctime': 1770164327.920964, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:15:56.866600 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1321440, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9335413, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:15:56.866604 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1321356, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.920964, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:15:56.866610 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 
65458, 'inode': 1321486, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9431593, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:15:56.866655 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1321356, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.920964, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:15:56.866661 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1321486, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9431593, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:15:56.866680 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': 
False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1321446, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9413905, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:15:56 | INFO  | Task 1eda9427-1779-4b4f-8663-35784540aa78 is in state SUCCESS 2026-02-04 01:15:56.866700 | orchestrator | 2026-02-04 01:15:56 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:15:56.866908 | orchestrator | 2026-02-04 01:15:56.866923 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1321446, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9413905, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:15:56.866929 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1321486, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9431593, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:15:56.866961 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1321491, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9441361, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:15:56.866974 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1321491, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9441361, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:15:56.866980 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1321446, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9413905, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:15:56.866990 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1321519, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9521594, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:15:56.867001 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1321519, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9521594, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:15:56.867007 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1321491, 'dev': 120, 'nlink': 
1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9441361, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:15:56.867013 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1321483, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9421594, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:15:56.867021 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1321483, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9421594, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:15:56.867026 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 
'inode': 1321519, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9521594, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:15:56.867032 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1321430, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9320838, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:15:56.867044 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1321430, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9320838, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:15:56.867050 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1321483, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9421594, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:15:56.867055 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1321394, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9271593, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:15:56.867061 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1321394, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9271593, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:15:56.867068 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': 
False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1321430, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9320838, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:15:56.867073 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1321425, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.931489, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:15:56.867084 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1321425, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.931489, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:15:56.867090 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': 
'0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1321394, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9271593, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:15:56.867095 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1321378, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9266152, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:15:56.867100 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1321378, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9266152, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:15:56.867107 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': 
{'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1321425, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.931489, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:15:56.867112 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1321435, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.932359, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:15:56.867124 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1321435, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.932359, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:15:56.867129 
| orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1321378, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9266152, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:15:56.867134 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1321511, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9511595, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:15:56.867139 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1321511, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9511595, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': 
False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:15:56.867146 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1321435, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.932359, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:15:56.867151 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1321500, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9471595, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:15:56.867164 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1321500, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9471595, 'gr_name': 'root', 
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:15:56.867169 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1321511, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9511595, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:15:56.867174 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1321358, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9211593, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:15:56.867179 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1321358, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 
1770163339.0, 'ctime': 1770164327.9211593, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:15:56.867187 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1321500, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9471595, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:15:56.867192 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1321364, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.923009, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:15:56.867204 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 
'size': 53882, 'inode': 1321364, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.923009, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:15:56.867209 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1321358, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9211593, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:15:56.867214 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1321479, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9417024, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:15:56.867220 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': 
False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1321479, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9417024, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:15:56.867225 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1321364, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.923009, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:15:56.867232 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1321494, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9447982, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:15:56.867241 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1321494, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9447982, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:15:56.867249 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1321479, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9417024, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:15:56.867254 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1321494, 'dev': 120, 'nlink': 1, 'atime': 1770163339.0, 'mtime': 1770163339.0, 'ctime': 1770164327.9447982, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:15:56.867261 | orchestrator 
| 2026-02-04 01:15:56.867266 | orchestrator | TASK [grafana : Check grafana containers] ************************************** 2026-02-04 01:15:56.867271 | orchestrator | Wednesday 04 February 2026 01:14:50 +0000 (0:00:37.240) 0:00:52.815 **** 2026-02-04 01:15:56.867277 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-04 01:15:56.867283 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-04 01:15:56.867291 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-04 01:15:56.867296 | orchestrator | 2026-02-04 01:15:56.867301 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2026-02-04 01:15:56.867307 | orchestrator | Wednesday 04 February 2026 01:14:51 +0000 (0:00:01.028) 0:00:53.844 **** 2026-02-04 01:15:56.867312 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:15:56.867318 | orchestrator | 2026-02-04 01:15:56.867323 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2026-02-04 01:15:56.867329 | orchestrator | Wednesday 04 February 2026 01:14:54 +0000 (0:00:02.215) 0:00:56.059 **** 2026-02-04 01:15:56.867334 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:15:56.867339 | orchestrator | 2026-02-04 01:15:56.867344 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-02-04 01:15:56.867349 | orchestrator | Wednesday 04 February 2026 01:14:56 +0000 (0:00:01.905) 0:00:57.965 **** 2026-02-04 01:15:56.867354 | orchestrator | 2026-02-04 01:15:56.867359 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-02-04 01:15:56.867367 | orchestrator | Wednesday 04 February 2026 01:14:56 +0000 (0:00:00.075) 0:00:58.041 **** 2026-02-04 01:15:56.867373 | orchestrator | 2026-02-04 01:15:56.867378 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-02-04 01:15:56.867383 | orchestrator | Wednesday 04 February 2026 01:14:56 +0000 (0:00:00.074) 0:00:58.115 **** 2026-02-04 01:15:56.867389 | orchestrator | 2026-02-04 
01:15:56.867394 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ******************** 2026-02-04 01:15:56.867399 | orchestrator | Wednesday 04 February 2026 01:14:56 +0000 (0:00:00.293) 0:00:58.408 **** 2026-02-04 01:15:56.867405 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:15:56.867411 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:15:56.867417 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:15:56.867422 | orchestrator | 2026-02-04 01:15:56.867428 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] ********* 2026-02-04 01:15:56.867433 | orchestrator | Wednesday 04 February 2026 01:14:58 +0000 (0:00:02.094) 0:01:00.503 **** 2026-02-04 01:15:56.867439 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:15:56.867444 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:15:56.867450 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left). 2026-02-04 01:15:56.867455 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left). 
2026-02-04 01:15:56.867461 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:15:56.867466 | orchestrator |
2026-02-04 01:15:56.867470 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] ***************
2026-02-04 01:15:56.867475 | orchestrator | Wednesday 04 February 2026 01:15:24 +0000 (0:00:26.134) 0:01:26.638 ****
2026-02-04 01:15:56.867481 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:15:56.867486 | orchestrator | changed: [testbed-node-2]
2026-02-04 01:15:56.867492 | orchestrator | changed: [testbed-node-1]
2026-02-04 01:15:56.867497 | orchestrator |
2026-02-04 01:15:56.867503 | orchestrator | TASK [grafana : Wait for grafana application ready] ****************************
2026-02-04 01:15:56.867511 | orchestrator | Wednesday 04 February 2026 01:15:49 +0000 (0:00:24.830) 0:01:51.469 ****
2026-02-04 01:15:56.867517 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:15:56.867523 | orchestrator |
2026-02-04 01:15:56.867529 | orchestrator | TASK [grafana : Remove old grafana docker volume] ******************************
2026-02-04 01:15:56.867535 | orchestrator | Wednesday 04 February 2026 01:15:52 +0000 (0:00:02.663) 0:01:54.133 ****
2026-02-04 01:15:56.867541 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:15:56.867547 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:15:56.867553 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:15:56.867558 | orchestrator |
2026-02-04 01:15:56.867564 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************
2026-02-04 01:15:56.867571 | orchestrator | Wednesday 04 February 2026 01:15:52 +0000 (0:00:00.532) 0:01:54.665 ****
2026-02-04 01:15:56.867578 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})
2026-02-04 01:15:56.867585 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}})
2026-02-04 01:15:56.867591 | orchestrator |
2026-02-04 01:15:56.867597 | orchestrator | TASK [grafana : Disable Getting Started panel] *********************************
2026-02-04 01:15:56.867604 | orchestrator | Wednesday 04 February 2026 01:15:55 +0000 (0:00:02.505) 0:01:57.171 ****
2026-02-04 01:15:56.867613 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:15:56.867619 | orchestrator |
2026-02-04 01:15:56.867625 | orchestrator | PLAY RECAP *********************************************************************
2026-02-04 01:15:56.867631 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-02-04 01:15:56.867638 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-02-04 01:15:56.867644 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-02-04 01:15:56.867650 | orchestrator |
2026-02-04 01:15:56.867657 | orchestrator |
2026-02-04 01:15:56.867662 | orchestrator | TASKS RECAP ********************************************************************
2026-02-04 01:15:56.867669 | orchestrator | Wednesday 04 February 2026 01:15:56 +0000 (0:00:00.751) 0:01:57.923 ****
2026-02-04 01:15:56.867675 | orchestrator | ===============================================================================
2026-02-04 01:15:56.867681 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 37.24s
2026-02-04 01:15:56.867687 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 26.13s
2026-02-04 01:15:56.867693 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 24.83s
2026-02-04 01:15:56.867698 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.66s
2026-02-04 01:15:56.867704 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.51s
2026-02-04 01:15:56.867710 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.22s
2026-02-04 01:15:56.867720 | orchestrator | grafana : Restart first grafana container ------------------------------- 2.09s
2026-02-04 01:15:56.867727 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 1.91s
2026-02-04 01:15:56.867734 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.63s
2026-02-04 01:15:56.867739 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 1.40s
2026-02-04 01:15:56.867748 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.37s
2026-02-04 01:15:56.867754 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.36s
2026-02-04 01:15:56.867760 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.21s
2026-02-04 01:15:56.867765 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.15s
2026-02-04 01:15:56.867770 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.03s
2026-02-04 01:15:56.867775 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 1.00s
2026-02-04 01:15:56.867780 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.82s
2026-02-04 01:15:56.867787 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 0.81s
2026-02-04 01:15:56.867791 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.80s
2026-02-04 01:15:56.867796 | orchestrator | grafana : Disable Getting Started panel --------------------------------- 0.75s
2026-02-04 01:15:59.911993 | orchestrator | 2026-02-04 01:15:59 | INFO  | Task e1a0af0c-c5dc-4576-858b-027414f2952c is in state STARTED
2026-02-04 01:15:59.914003 | orchestrator | 2026-02-04 01:15:59 | INFO  | Task 89f3b89c-853a-4e8d-8e9c-e7afb1d8e9fd is in state STARTED
2026-02-04 01:15:59.914076 | orchestrator | 2026-02-04 01:15:59 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:16:02.956120 | orchestrator | 2026-02-04 01:16:02 | INFO  | Task e1a0af0c-c5dc-4576-858b-027414f2952c is in state STARTED
2026-02-04 01:16:02.956915 | orchestrator | 2026-02-04 01:16:02 | INFO  | Task 89f3b89c-853a-4e8d-8e9c-e7afb1d8e9fd is in state STARTED
2026-02-04 01:16:02.956952 | orchestrator | 2026-02-04 01:16:02 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:16:06.005264 | orchestrator | 2026-02-04 01:16:06 | INFO  | Task e1a0af0c-c5dc-4576-858b-027414f2952c is in state STARTED
2026-02-04 01:16:06.006532 | orchestrator | 2026-02-04 01:16:06 | INFO  | Task 89f3b89c-853a-4e8d-8e9c-e7afb1d8e9fd is in state STARTED
2026-02-04 01:16:06.006580 | orchestrator | 2026-02-04 01:16:06 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:16:09.054162 | orchestrator | 2026-02-04 01:16:09 | INFO  | Task e1a0af0c-c5dc-4576-858b-027414f2952c is in state STARTED
2026-02-04 01:16:09.057334 | orchestrator | 2026-02-04 01:16:09 | INFO  | Task 89f3b89c-853a-4e8d-8e9c-e7afb1d8e9fd is in state STARTED
2026-02-04 01:16:09.057376 | orchestrator | 2026-02-04 01:16:09 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:16:12.101528 | orchestrator | 2026-02-04 01:16:12 | INFO  | Task e1a0af0c-c5dc-4576-858b-027414f2952c is in state STARTED
2026-02-04 01:16:12.102585 | orchestrator | 2026-02-04 01:16:12 | INFO  | Task 89f3b89c-853a-4e8d-8e9c-e7afb1d8e9fd is in state STARTED
2026-02-04 01:16:12.102825 | orchestrator | 2026-02-04 01:16:12 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:16:15.152920 | orchestrator | 2026-02-04 01:16:15 | INFO  | Task e1a0af0c-c5dc-4576-858b-027414f2952c is in state STARTED
2026-02-04 01:16:15.155214 | orchestrator | 2026-02-04 01:16:15 | INFO  | Task 89f3b89c-853a-4e8d-8e9c-e7afb1d8e9fd is in state STARTED
2026-02-04 01:16:15.155300 | orchestrator | 2026-02-04 01:16:15 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:16:18.202495 | orchestrator | 2026-02-04 01:16:18 | INFO  | Task e1a0af0c-c5dc-4576-858b-027414f2952c is in state STARTED
2026-02-04 01:16:18.205248 | orchestrator | 2026-02-04 01:16:18 | INFO  | Task 89f3b89c-853a-4e8d-8e9c-e7afb1d8e9fd is in state STARTED
2026-02-04 01:16:18.205312 | orchestrator | 2026-02-04 01:16:18 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:16:21.244205 | orchestrator | 2026-02-04 01:16:21 | INFO  | Task e1a0af0c-c5dc-4576-858b-027414f2952c is in state STARTED
2026-02-04 01:16:21.246767 | orchestrator | 2026-02-04 01:16:21 | INFO  | Task 89f3b89c-853a-4e8d-8e9c-e7afb1d8e9fd is in state STARTED
2026-02-04 01:16:21.246822 | orchestrator | 2026-02-04 01:16:21 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:16:24.286200 | orchestrator | 2026-02-04 01:16:24 | INFO  | Task e1a0af0c-c5dc-4576-858b-027414f2952c is in state STARTED
2026-02-04 01:16:24.287092 | orchestrator | 2026-02-04 01:16:24 | INFO  | Task 89f3b89c-853a-4e8d-8e9c-e7afb1d8e9fd is in state STARTED
2026-02-04 01:16:24.287452 | orchestrator | 2026-02-04 01:16:24 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:16:27.337873 | orchestrator | 2026-02-04 01:16:27 | INFO  | Task e1a0af0c-c5dc-4576-858b-027414f2952c is in state STARTED
2026-02-04 01:16:27.339765 | orchestrator | 2026-02-04 01:16:27 | INFO  | Task 89f3b89c-853a-4e8d-8e9c-e7afb1d8e9fd is in state STARTED
2026-02-04 01:16:27.339817 | orchestrator | 2026-02-04 01:16:27 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:16:30.378733 | orchestrator | 2026-02-04 01:16:30 | INFO  | Task e1a0af0c-c5dc-4576-858b-027414f2952c is in state STARTED
2026-02-04 01:16:30.378781 | orchestrator | 2026-02-04 01:16:30 | INFO  | Task 89f3b89c-853a-4e8d-8e9c-e7afb1d8e9fd is in state STARTED
2026-02-04 01:16:30.378786 | orchestrator | 2026-02-04 01:16:30 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:16:33.424621 | orchestrator | 2026-02-04 01:16:33 | INFO  | Task e1a0af0c-c5dc-4576-858b-027414f2952c is in state STARTED
2026-02-04 01:16:33.427816 | orchestrator | 2026-02-04 01:16:33 | INFO  | Task 89f3b89c-853a-4e8d-8e9c-e7afb1d8e9fd is in state STARTED
2026-02-04 01:16:33.427880 | orchestrator | 2026-02-04 01:16:33 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:16:36.483295 | orchestrator | 2026-02-04 01:16:36 | INFO  | Task e1a0af0c-c5dc-4576-858b-027414f2952c is in state SUCCESS
2026-02-04 01:16:36.485259 | orchestrator |
2026-02-04 01:16:36.485302 | orchestrator |
2026-02-04 01:16:36.485308 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-04 01:16:36.485314 | orchestrator |
2026-02-04 01:16:36.485319 | orchestrator | TASK [Group hosts based on OpenStack release] **********************************
2026-02-04 01:16:36.485324 | orchestrator | Wednesday 04 February 2026 01:06:28 +0000 (0:00:00.331) 0:00:00.331 ****
2026-02-04 01:16:36.485329 | orchestrator | changed: [testbed-manager]
2026-02-04 01:16:36.485335 | orchestrator | changed: [testbed-node-0]
2026-02-04 01:16:36.485340 | orchestrator | changed: [testbed-node-1]
2026-02-04 01:16:36.485346 | orchestrator | changed: [testbed-node-2]
2026-02-04 01:16:36.485351 | orchestrator | changed: [testbed-node-3]
2026-02-04 01:16:36.485357 | orchestrator | changed: [testbed-node-4]
2026-02-04 01:16:36.485362 | orchestrator | changed: [testbed-node-5]
2026-02-04 01:16:36.485367 | orchestrator |
2026-02-04 01:16:36.485372 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-04 01:16:36.485377 | orchestrator | Wednesday 04 February 2026 01:06:29 +0000 (0:00:00.821) 0:00:01.152 ****
2026-02-04 01:16:36.485383 | orchestrator | changed: [testbed-manager]
2026-02-04 01:16:36.485388 | orchestrator | changed: [testbed-node-0]
2026-02-04 01:16:36.485393 | orchestrator | changed: [testbed-node-1]
2026-02-04 01:16:36.485398 | orchestrator | changed: [testbed-node-2]
2026-02-04 01:16:36.485403 | orchestrator | changed: [testbed-node-3]
2026-02-04 01:16:36.485408 | orchestrator | changed: [testbed-node-4]
2026-02-04 01:16:36.485413 | orchestrator | changed: [testbed-node-5]
2026-02-04 01:16:36.485419 | orchestrator |
2026-02-04 01:16:36.485424 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-04 01:16:36.485429 | orchestrator | Wednesday 04 February 2026 01:06:29 +0000 (0:00:00.644) 0:00:01.797 ****
2026-02-04 01:16:36.485449 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True)
2026-02-04 01:16:36.485455 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True)
2026-02-04 01:16:36.485462 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True)
2026-02-04 01:16:36.485469 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True)
2026-02-04 01:16:36.485525 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True)
2026-02-04 01:16:36.485531 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True)
2026-02-04 01:16:36.485536 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True)
2026-02-04 01:16:36.485546 | orchestrator |
2026-02-04 01:16:36.485559 | orchestrator | PLAY [Bootstrap nova API databases] ********************************************
2026-02-04 01:16:36.485565 | orchestrator |
2026-02-04 01:16:36.485582 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-02-04 01:16:36.485588 | orchestrator | Wednesday 04 February 2026 01:06:30 +0000 (0:00:00.803) 0:00:02.600 ****
2026-02-04 01:16:36.485599 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 01:16:36.485605 | orchestrator |
2026-02-04 01:16:36.485610 | orchestrator | TASK [nova : Creating Nova databases] ******************************************
2026-02-04 01:16:36.485615 | orchestrator | Wednesday 04 February 2026 01:06:31 +0000 (0:00:00.644) 0:00:03.245 ****
2026-02-04 01:16:36.485621 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0)
2026-02-04 01:16:36.485627 | orchestrator | changed: [testbed-node-0] => (item=nova_api)
2026-02-04 01:16:36.485632 | orchestrator |
2026-02-04 01:16:36.485637 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] *************
2026-02-04 01:16:36.485643 | orchestrator | Wednesday 04 February 2026 01:06:35 +0000 (0:00:04.697) 0:00:07.942 ****
2026-02-04 01:16:36.485648 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-02-04 01:16:36.485660 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-02-04 01:16:36.485666 | orchestrator | changed: [testbed-node-0]
2026-02-04 01:16:36.485671 | orchestrator |
2026-02-04 01:16:36.485676 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2026-02-04 01:16:36.485701 | orchestrator | Wednesday 04 February 2026 01:06:40 +0000 (0:00:04.078) 0:00:12.021 ****
2026-02-04 01:16:36.485707 | orchestrator | changed: [testbed-node-0]
2026-02-04 01:16:36.485713 | orchestrator |
2026-02-04 01:16:36.485718 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************
2026-02-04 01:16:36.485724 | orchestrator | Wednesday 04 February 2026 01:06:40 +0000 (0:00:00.886) 0:00:12.907 ****
2026-02-04 01:16:36.485741 | orchestrator | changed: [testbed-node-0]
2026-02-04 01:16:36.485746 | orchestrator |
2026-02-04 01:16:36.485752 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ********************
2026-02-04 01:16:36.485758 | orchestrator | Wednesday 04 February 2026 01:06:43 +0000 (0:00:02.378) 0:00:15.285 ****
2026-02-04 01:16:36.485763 | orchestrator | changed: [testbed-node-0]
2026-02-04 01:16:36.485780 | orchestrator |
2026-02-04 01:16:36.485806 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-02-04 01:16:36.485810 | orchestrator | Wednesday 04 February 2026 01:06:49 +0000 (0:00:06.212) 0:00:21.498 ****
2026-02-04 01:16:36.485813 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:16:36.485816 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:16:36.485820 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:16:36.485823 | orchestrator |
2026-02-04 01:16:36.485826 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2026-02-04 01:16:36.485829 | orchestrator | Wednesday 04 February 2026 01:06:50 +0000 (0:00:00.602) 0:00:22.101 ****
2026-02-04 01:16:36.485832 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:16:36.485836 | orchestrator |
2026-02-04 01:16:36.485839 | orchestrator | TASK [nova : Create cell0 mappings] ********************************************
2026-02-04 01:16:36.485842 | orchestrator | Wednesday 04 February 2026 01:07:21 +0000 (0:00:31.523) 0:00:53.625 ****
2026-02-04 01:16:36.485845 | orchestrator | changed: [testbed-node-0]
2026-02-04 01:16:36.485853 | orchestrator |
2026-02-04 01:16:36.485856 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-02-04 01:16:36.485860 | orchestrator | Wednesday 04 February 2026 01:07:39 +0000 (0:00:18.241) 0:01:11.866 ****
2026-02-04 01:16:36.485863 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:16:36.485866 | orchestrator |
2026-02-04 01:16:36.485869 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-02-04 01:16:36.485872 | orchestrator | Wednesday 04 February 2026 01:07:54 +0000 (0:00:14.789) 0:01:26.656 ****
2026-02-04 01:16:36.485883 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:16:36.485886 | orchestrator |
2026-02-04 01:16:36.485889 | orchestrator | TASK [nova : Update cell0 mappings] ********************************************
2026-02-04 01:16:36.485892 | orchestrator | Wednesday 04 February 2026 01:07:55 +0000 (0:00:01.279) 0:01:27.936 ****
2026-02-04 01:16:36.485895 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:16:36.485899 | orchestrator |
2026-02-04 01:16:36.485902 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-02-04 01:16:36.485905 | orchestrator | Wednesday 04 February 2026 01:07:56 +0000 (0:00:00.549) 0:01:28.485 ****
2026-02-04 01:16:36.485908 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 01:16:36.485911 | orchestrator |
2026-02-04 01:16:36.485940 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2026-02-04 01:16:36.485944 | orchestrator | Wednesday 04 February 2026 01:07:57 +0000 (0:00:00.555) 0:01:29.041 ****
2026-02-04 01:16:36.485950 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:16:36.485955 | orchestrator |
2026-02-04 01:16:36.485960 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2026-02-04 01:16:36.485965 | orchestrator | Wednesday 04 February 2026 01:08:17 +0000 (0:00:20.102) 0:01:49.144 ****
2026-02-04 01:16:36.485971 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:16:36.485975 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:16:36.485994 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:16:36.486001 | orchestrator |
2026-02-04 01:16:36.486006 | orchestrator | PLAY [Bootstrap nova cell databases] *******************************************
2026-02-04 01:16:36.486035 | orchestrator |
2026-02-04 01:16:36.486042 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-02-04 01:16:36.486048 | orchestrator | Wednesday 04 February 2026 01:08:17 +0000 (0:00:00.555) 0:01:49.699 ****
2026-02-04 01:16:36.486054 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 01:16:36.486060 | orchestrator |
2026-02-04 01:16:36.486063 | orchestrator | TASK [nova-cell : Creating Nova cell database] *********************************
2026-02-04 01:16:36.486066 | orchestrator | Wednesday 04 February 2026 01:08:18 +0000 (0:00:00.675) 0:01:50.375 ****
2026-02-04 01:16:36.486069 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:16:36.486072 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:16:36.486075 | orchestrator | changed: [testbed-node-0]
2026-02-04 01:16:36.486079 | orchestrator |
2026-02-04 01:16:36.486085 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] ****
2026-02-04 01:16:36.486088 | orchestrator | Wednesday 04 February 2026 01:08:20 +0000 (0:00:01.974) 0:01:52.349 ****
2026-02-04 01:16:36.486092 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:16:36.486095 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:16:36.486098 | orchestrator | changed: [testbed-node-0]
2026-02-04 01:16:36.486101 | orchestrator |
2026-02-04 01:16:36.486104 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-02-04 01:16:36.486107 | orchestrator | Wednesday 04 February 2026 01:08:22 +0000 (0:00:02.077) 0:01:54.426 ****
2026-02-04 01:16:36.486110 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:16:36.486114 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:16:36.486117 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:16:36.486120 | orchestrator |
2026-02-04 01:16:36.486123 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-02-04 01:16:36.486130 | orchestrator | Wednesday 04 February 2026 01:08:22 +0000 (0:00:00.422) 0:01:54.848 ****
2026-02-04 01:16:36.486133 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-02-04 01:16:36.486136 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:16:36.486139 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-02-04 01:16:36.486142 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:16:36.486146 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-02-04 01:16:36.486151 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}]
2026-02-04 01:16:36.486156 | orchestrator |
2026-02-04 01:16:36.486161 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-02-04 01:16:36.486167 | orchestrator | Wednesday 04 February 2026 01:08:30 +0000 (0:00:08.108) 0:02:02.957 ****
2026-02-04 01:16:36.486172 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:16:36.486177 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:16:36.486183 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:16:36.486208 | orchestrator |
2026-02-04 01:16:36.486213 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-02-04 01:16:36.486219 | orchestrator | Wednesday 04 February 2026 01:08:31 +0000 (0:00:00.390) 0:02:03.348 ****
2026-02-04 01:16:36.486224 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-02-04 01:16:36.486230 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:16:36.486236 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-02-04 01:16:36.486241 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:16:36.486247 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-02-04 01:16:36.486252 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:16:36.486257 | orchestrator |
2026-02-04 01:16:36.486263 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2026-02-04 01:16:36.486268 | orchestrator | Wednesday 04 February 2026 01:08:32 +0000 (0:00:01.149) 0:02:04.497 ****
2026-02-04 01:16:36.486274 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:16:36.486279 | orchestrator | changed: [testbed-node-0]
2026-02-04 01:16:36.486285 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:16:36.486290 | orchestrator |
2026-02-04 01:16:36.486296 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ******
2026-02-04 01:16:36.486302 | orchestrator | Wednesday 04 February 2026 01:08:33 +0000 (0:00:01.372) 0:02:05.869 ****
2026-02-04 01:16:36.486307 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:16:36.486312 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:16:36.486318 | orchestrator | changed: [testbed-node-0]
2026-02-04 01:16:36.486323 | orchestrator |
2026-02-04 01:16:36.486329 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] **************
2026-02-04 01:16:36.486334 | orchestrator | Wednesday 04 February 2026 01:08:35 +0000 (0:00:01.534) 0:02:07.404 ****
2026-02-04 01:16:36.486339 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:16:36.486344 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:16:36.486354 | orchestrator | changed: [testbed-node-0]
2026-02-04 01:16:36.486360 | orchestrator |
2026-02-04 01:16:36.486421 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] ***********************
2026-02-04 01:16:36.486430 | orchestrator | Wednesday 04 February 2026 01:08:39 +0000 (0:00:03.975) 0:02:11.380 ****
2026-02-04 01:16:36.486436 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:16:36.486442 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:16:36.486447 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:16:36.486453 | orchestrator |
2026-02-04 01:16:36.486458 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-02-04 01:16:36.486464 | orchestrator | Wednesday 04 February 2026 01:09:00 +0000 (0:00:21.496) 0:02:32.876 ****
2026-02-04 01:16:36.486470 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:16:36.486476 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:16:36.486481 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:16:36.486487 | orchestrator |
2026-02-04 01:16:36.486521 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-02-04 01:16:36.486535 | orchestrator | Wednesday 04 February 2026 01:09:14 +0000 (0:00:13.870) 0:02:46.747 ****
2026-02-04 01:16:36.486541 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:16:36.486547 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:16:36.486552 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:16:36.486557 | orchestrator |
2026-02-04 01:16:36.486560 | orchestrator | TASK [nova-cell : Create cell] *************************************************
2026-02-04 01:16:36.486563 | orchestrator | Wednesday 04 February 2026 01:09:15 +0000 (0:00:01.118) 0:02:47.866 ****
2026-02-04 01:16:36.486566 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:16:36.486569 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:16:36.486572 | orchestrator | changed: [testbed-node-0]
2026-02-04 01:16:36.486575 | orchestrator |
2026-02-04 01:16:36.486578 | orchestrator | TASK [nova-cell : Update cell] *************************************************
2026-02-04 01:16:36.486582 | orchestrator | Wednesday 04 February 2026 01:09:29 +0000 (0:00:13.712) 0:03:01.578 ****
2026-02-04 01:16:36.486585 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:16:36.486588 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:16:36.486591 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:16:36.486594 | orchestrator |
2026-02-04 01:16:36.486597 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2026-02-04 01:16:36.486600 | orchestrator | Wednesday 04 February 2026 01:09:30 +0000 (0:00:01.148) 0:03:02.726 ****
2026-02-04 01:16:36.486606 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:16:36.486609 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:16:36.486613 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:16:36.486616 | orchestrator |
2026-02-04 01:16:36.486619 | orchestrator | PLAY [Apply role nova] *********************************************************
2026-02-04 01:16:36.486622 | orchestrator |
2026-02-04 01:16:36.486625 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-02-04 01:16:36.486628 | orchestrator | Wednesday 04 February 2026 01:09:31 +0000 (0:00:00.580) 0:03:03.307 ****
2026-02-04 01:16:36.486632 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 01:16:36.486635 | orchestrator |
2026-02-04 01:16:36.486638 | orchestrator | TASK [service-ks-register : nova | Creating services] **************************
2026-02-04 01:16:36.486641 | orchestrator | Wednesday 04 February 2026 01:09:31 +0000 (0:00:00.601) 0:03:03.909 ****
2026-02-04 01:16:36.486645 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))
2026-02-04 01:16:36.486648 | orchestrator | changed: [testbed-node-0] => (item=nova (compute))
2026-02-04 01:16:36.486651 | orchestrator |
2026-02-04 01:16:36.486654 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] *************************
2026-02-04 01:16:36.486657 | orchestrator | Wednesday 04 February 2026 01:09:34 +0000 (0:00:02.758) 0:03:06.668 ****
2026-02-04 01:16:36.486660 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)
2026-02-04 01:16:36.486664 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)
2026-02-04 01:16:36.486668 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal)
2026-02-04 01:16:36.486671 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public)
2026-02-04 01:16:36.486674 | orchestrator |
2026-02-04 01:16:36.486677 | orchestrator | TASK [service-ks-register : nova | Creating projects] **************************
2026-02-04 01:16:36.486680 | orchestrator | Wednesday 04 February 2026 01:09:41 +0000 (0:00:06.641) 0:03:13.310 ****
2026-02-04 01:16:36.486683 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-02-04 01:16:36.486687 | orchestrator |
2026-02-04 01:16:36.486690 | orchestrator | TASK [service-ks-register : nova | Creating users] *****************************
2026-02-04 01:16:36.486693 | orchestrator | Wednesday 04 February 2026 01:09:44 +0000 (0:00:03.095) 0:03:16.405 ****
2026-02-04 01:16:36.486696 | orchestrator | changed: [testbed-node-0] => (item=nova -> service)
2026-02-04 01:16:36.486702 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-02-04 01:16:36.486705 | orchestrator |
2026-02-04 01:16:36.486708 | orchestrator | TASK [service-ks-register : nova | Creating roles] *****************************
2026-02-04 01:16:36.486711 | orchestrator | Wednesday 04 February 2026 01:09:48 +0000 (0:00:04.422) 0:03:20.828 ****
2026-02-04 01:16:36.486714 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-02-04 01:16:36.486717 | orchestrator |
2026-02-04 01:16:36.486720 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************
2026-02-04 01:16:36.486724 | orchestrator | Wednesday 04 February 2026 01:09:52 +0000 (0:00:03.366) 0:03:24.194 ****
2026-02-04 01:16:36.486727 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin)
2026-02-04 01:16:36.486730 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service)
2026-02-04 01:16:36.486733 | orchestrator |
2026-02-04 01:16:36.486736 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2026-02-04 01:16:36.486743 | orchestrator | Wednesday 04 February 2026 01:09:59 +0000 (0:00:07.704) 0:03:31.898 ****
2026-02-04 01:16:36.486749 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-02-04 01:16:36.486756 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-02-04 01:16:36.486760 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external':
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-04 01:16:36.486770 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-04 01:16:36.486775 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-04 01:16:36.486779 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-02-04 01:16:36.486782 | orchestrator | 
2026-02-04 01:16:36.486785 | orchestrator | TASK [nova : Check if policies shall be overwritten] ***************************
2026-02-04 01:16:36.486788 | orchestrator | Wednesday 04 February 2026 01:10:01 +0000 (0:00:01.532) 0:03:33.431 ****
2026-02-04 01:16:36.486793 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:16:36.486796 | orchestrator | 
2026-02-04 01:16:36.486799 | orchestrator | TASK [nova : Set nova policy file] *********************************************
2026-02-04 01:16:36.486803 | orchestrator | Wednesday 04 February 2026 01:10:01 +0000 (0:00:00.256) 0:03:33.688 ****
2026-02-04 01:16:36.486806 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:16:36.486809 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:16:36.486812 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:16:36.486815 | orchestrator | 
2026-02-04 01:16:36.486819 | orchestrator | TASK [nova : Check for vendordata file] ****************************************
2026-02-04 01:16:36.486822 | orchestrator | Wednesday 04 February 2026 01:10:02 +0000 (0:00:01.026) 0:03:34.714 ****
2026-02-04 01:16:36.486825 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-04 01:16:36.486828 | orchestrator | 
2026-02-04 01:16:36.486831 | orchestrator | TASK [nova : Set vendordata file path] *****************************************
2026-02-04 01:16:36.486834 | orchestrator | Wednesday 04 February 2026 01:10:04 +0000 (0:00:01.452) 0:03:36.167 ****
2026-02-04 01:16:36.486838 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:16:36.486843 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:16:36.486847 | orchestrator | skipping: [testbed-node-2] 
2026-02-04 01:16:36.486850 | orchestrator | 2026-02-04 01:16:36.486853 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-02-04 01:16:36.486856 | orchestrator | Wednesday 04 February 2026 01:10:04 +0000 (0:00:00.591) 0:03:36.758 **** 2026-02-04 01:16:36.486860 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 01:16:36.486863 | orchestrator | 2026-02-04 01:16:36.486866 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-02-04 01:16:36.486869 | orchestrator | Wednesday 04 February 2026 01:10:05 +0000 (0:00:00.615) 0:03:37.374 **** 2026-02-04 01:16:36.486873 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-04 
01:16:36.486879 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-04 01:16:36.486884 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-04 01:16:36.486889 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 
'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-04 01:16:36.486896 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-04 01:16:36.486902 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': 
True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-04 01:16:36.486905 | orchestrator | 2026-02-04 01:16:36.486908 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-02-04 01:16:36.486911 | orchestrator | Wednesday 04 February 2026 01:10:08 +0000 (0:00:03.379) 0:03:40.753 **** 2026-02-04 01:16:36.486938 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-04 01:16:36.486946 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-04 01:16:36.486952 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:16:36.486956 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-04 01:16:36.486959 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': 
{'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-04 01:16:36.486962 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:16:36.486969 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-04 01:16:36.486976 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 
'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-04 01:16:36.486986 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:16:36.486992 | orchestrator | 2026-02-04 01:16:36.486998 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-02-04 01:16:36.487003 | orchestrator | Wednesday 04 February 2026 01:10:09 +0000 (0:00:00.899) 0:03:41.653 **** 2026-02-04 01:16:36.487010 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-04 01:16:36.487017 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-04 01:16:36.487028 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-04 01:16:36.487035 | 
orchestrator | skipping: [testbed-node-1] 2026-02-04 01:16:36.487040 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-04 01:16:36.487050 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:16:36.487058 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  
2026-02-04 01:16:36.487064 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-04 01:16:36.487070 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:16:36.487076 | orchestrator | 2026-02-04 01:16:36.487081 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2026-02-04 01:16:36.487086 | orchestrator | Wednesday 04 February 2026 01:10:11 +0000 (0:00:01.701) 0:03:43.354 **** 2026-02-04 01:16:36.487096 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-04 01:16:36.487104 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-04 01:16:36.487114 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-04 01:16:36.487120 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-04 01:16:36.487129 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 
2026-02-04 01:16:36.487136 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-04 01:16:36.487141 | orchestrator | 2026-02-04 01:16:36.487147 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2026-02-04 01:16:36.487153 | orchestrator | Wednesday 04 February 2026 01:10:15 +0000 (0:00:04.322) 0:03:47.676 **** 2026-02-04 01:16:36.487164 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 
'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-02-04 01:16:36.487170 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-02-04 01:16:36.487180 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-02-04 01:16:36.487234 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-02-04 01:16:36.487249 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-02-04 01:16:36.487256 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-02-04 01:16:36.487262 | orchestrator |
2026-02-04 01:16:36.487268 | orchestrator | TASK [nova : Copying over existing policy file] ********************************
2026-02-04 01:16:36.487274 | orchestrator | Wednesday 04 February 2026 01:10:27 +0000 (0:00:11.725) 0:03:59.401 ****
2026-02-04 01:16:36.487280 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-02-04 01:16:36.487290 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-02-04 01:16:36.487297 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:16:36.487303 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-02-04 01:16:36.487315 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-02-04 01:16:36.487322 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:16:36.487328 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-02-04 01:16:36.487334 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-02-04 01:16:36.487339 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:16:36.487391 | orchestrator |
2026-02-04 01:16:36.487397 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] **********************************
2026-02-04 01:16:36.487403 | orchestrator | Wednesday 04 February 2026 01:10:28 +0000 (0:00:01.267) 0:04:00.669 ****
2026-02-04 01:16:36.487409 | orchestrator | changed: [testbed-node-0]
2026-02-04 01:16:36.487414 | orchestrator | changed: [testbed-node-1]
2026-02-04 01:16:36.487420 | orchestrator | changed: [testbed-node-2]
2026-02-04 01:16:36.487425 | orchestrator |
2026-02-04 01:16:36.487433 | orchestrator | TASK [nova : Copying over vendordata file] *************************************
2026-02-04 01:16:36.487441 | orchestrator | Wednesday 04 February 2026 01:10:30 +0000 (0:00:02.154) 0:04:02.824 ****
2026-02-04 01:16:36.487445 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:16:36.487448 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:16:36.487451 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:16:36.487454 | orchestrator |
2026-02-04 01:16:36.487457 | orchestrator | TASK [nova : Check nova containers] ********************************************
2026-02-04 01:16:36.487460 | orchestrator | Wednesday 04 February 2026 01:10:31 +0000 (0:00:00.599) 0:04:03.424 ****
2026-02-04 01:16:36.487467 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-02-04 01:16:36.487471 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-02-04 01:16:36.487475 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-02-04 01:16:36.487481 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-02-04 01:16:36.487490 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-02-04 01:16:36.487497 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-02-04 01:16:36.487503 | orchestrator |
2026-02-04 01:16:36.487523 | orchestrator | TASK [nova : Flush handlers] ***************************************************
2026-02-04 01:16:36.487529 | orchestrator | Wednesday 04 February 2026 01:10:36 +0000 (0:00:04.810) 0:04:08.236 ****
2026-02-04 01:16:36.487535 | orchestrator |
2026-02-04 01:16:36.487540 | orchestrator | TASK [nova : Flush handlers] ***************************************************
2026-02-04 01:16:36.487546 | orchestrator | Wednesday 04 February 2026 01:10:36 +0000 (0:00:00.676) 0:04:08.913 ****
2026-02-04 01:16:36.487550 | orchestrator |
2026-02-04 01:16:36.487554 | orchestrator | TASK [nova : Flush handlers] ***************************************************
2026-02-04 01:16:36.487557 | orchestrator | Wednesday 04 February 2026 01:10:37 +0000 (0:00:00.530) 0:04:09.443 ****
2026-02-04 01:16:36.487560 | orchestrator |
2026-02-04 01:16:36.487563 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] **********************
2026-02-04 01:16:36.487566 | orchestrator | Wednesday 04 February 2026 01:10:37 +0000 (0:00:00.295) 0:04:09.739 ****
2026-02-04 01:16:36.487569 | orchestrator | changed: [testbed-node-0]
2026-02-04 01:16:36.487572 | orchestrator | changed: [testbed-node-2]
2026-02-04 01:16:36.487575 | orchestrator | changed: [testbed-node-1]
2026-02-04 01:16:36.487578 | orchestrator |
2026-02-04 01:16:36.487581 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] ****************************
2026-02-04 01:16:36.487585 | orchestrator | Wednesday 04 February 2026 01:10:58 +0000 (0:00:20.785) 0:04:30.524 ****
2026-02-04 01:16:36.487588 | orchestrator | changed: [testbed-node-0]
2026-02-04 01:16:36.487591 | orchestrator | changed: [testbed-node-1]
2026-02-04 01:16:36.487594 | orchestrator | changed: [testbed-node-2]
2026-02-04 01:16:36.487597 | orchestrator |
2026-02-04 01:16:36.487606 | orchestrator | PLAY [Apply role nova-cell] ****************************************************
2026-02-04 01:16:36.487609 | orchestrator |
2026-02-04 01:16:36.487615 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2026-02-04 01:16:36.487619 | orchestrator | Wednesday 04 February 2026 01:11:12 +0000 (0:00:13.751) 0:04:44.276 ****
2026-02-04 01:16:36.487625 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 01:16:36.487628 | orchestrator |
2026-02-04 01:16:36.487632 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2026-02-04 01:16:36.487635 | orchestrator | Wednesday 04 February 2026 01:11:14 +0000 (0:00:02.267) 0:04:46.544 ****
2026-02-04 01:16:36.487638 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:16:36.487641 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:16:36.487644 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:16:36.487647 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:16:36.487651 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:16:36.487654 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:16:36.487657 | orchestrator |
2026-02-04 01:16:36.487660 | orchestrator | TASK [Load and persist br_netfilter module] ************************************
2026-02-04 01:16:36.487663 | orchestrator | Wednesday 04 February 2026 01:11:15 +0000 (0:00:00.956) 0:04:47.500 ****
2026-02-04 01:16:36.487666 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:16:36.487669 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:16:36.487672 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:16:36.487676 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-04 01:16:36.487679 | orchestrator |
2026-02-04 01:16:36.487682 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-02-04 01:16:36.487688 | orchestrator | Wednesday 04 February 2026 01:11:18 +0000 (0:00:02.769) 0:04:50.269 ****
2026-02-04 01:16:36.487691 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter)
2026-02-04 01:16:36.487695 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter)
2026-02-04 01:16:36.487698 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter)
2026-02-04 01:16:36.487701 | orchestrator |
2026-02-04 01:16:36.487705 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-02-04 01:16:36.487710 | orchestrator | Wednesday 04 February 2026 01:11:19 +0000 (0:00:01.300) 0:04:51.570 ****
2026-02-04 01:16:36.487716 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter)
2026-02-04 01:16:36.487722 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter)
2026-02-04 01:16:36.487727 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter)
2026-02-04 01:16:36.487751 | orchestrator |
2026-02-04 01:16:36.487757 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-02-04 01:16:36.487763 | orchestrator | Wednesday 04 February 2026 01:11:21 +0000 (0:00:01.910) 0:04:53.480 ****
2026-02-04 01:16:36.487768 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)
2026-02-04 01:16:36.487773 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:16:36.487779 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)
2026-02-04 01:16:36.487784 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:16:36.487798 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)
2026-02-04 01:16:36.487804 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:16:36.487809 | orchestrator |
2026-02-04 01:16:36.487812 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] **********************
2026-02-04 01:16:36.487815 | orchestrator | Wednesday 04 February 2026 01:11:22 +0000 (0:00:00.554) 0:04:54.035 ****
2026-02-04 01:16:36.487818 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-04 01:16:36.487821 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-04 01:16:36.487824 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-04 01:16:36.487828 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-04 01:16:36.487831 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:16:36.487836 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-04 01:16:36.487839 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-04 01:16:36.487845 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-04 01:16:36.487848 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:16:36.487851 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-04 01:16:36.487854 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-04 01:16:36.487858 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-04 01:16:36.487861 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:16:36.487864 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-04 01:16:36.487867 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-04 01:16:36.487870 | orchestrator |
2026-02-04 01:16:36.487873 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ********************************
2026-02-04 01:16:36.487876 | orchestrator | Wednesday 04 February 2026 01:11:23 +0000 (0:00:01.532) 0:04:55.568 ****
2026-02-04 01:16:36.487879 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:16:36.487882 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:16:36.487885 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:16:36.487888 | orchestrator | changed: [testbed-node-3]
2026-02-04 01:16:36.487891 | orchestrator | changed: [testbed-node-4]
2026-02-04 01:16:36.487894 | orchestrator | changed: [testbed-node-5]
2026-02-04 01:16:36.487897 | orchestrator |
2026-02-04 01:16:36.487901 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] ***************************************
2026-02-04 01:16:36.487904 | orchestrator | Wednesday 04 February 2026 01:11:24 +0000 (0:00:01.156) 0:04:56.725 ****
2026-02-04 01:16:36.487907 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:16:36.487910 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:16:36.487913 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:16:36.487930 | orchestrator | changed: [testbed-node-5]
2026-02-04 01:16:36.487933 | orchestrator | changed: [testbed-node-4]
2026-02-04 01:16:36.487937 | orchestrator | changed: [testbed-node-3]
2026-02-04 01:16:36.487940 | orchestrator |
2026-02-04 01:16:36.487943 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2026-02-04 01:16:36.487946 | orchestrator | Wednesday 04 February 2026 01:11:27 +0000 (0:00:02.644) 0:04:59.369 ****
2026-02-04 01:16:36.487950 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-02-04 01:16:36.488233 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-02-04 01:16:36.488256 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-02-04 01:16:36.488261 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-02-04 01:16:36.488266 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-02-04 01:16:36.488269 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-02-04 01:16:36.488273 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-02-04 01:16:36.488280 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-02-04 01:16:36.488286 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-04 01:16:36.488291 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-02-04 01:16:36.488295 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-02-04 01:16:36.488298 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-02-04 01:16:36.488301 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-02-04 01:16:36.488306 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-04 01:16:36.488312 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-04 01:16:36.488316 | orchestrator |
2026-02-04 01:16:36.488319 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2026-02-04 01:16:36.488322 | orchestrator | Wednesday 04 February 2026 01:11:31 +0000 (0:00:03.612) 0:05:02.981 ****
2026-02-04 01:16:36.488326 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 01:16:36.488329 | orchestrator |
2026-02-04 01:16:36.488333 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] ***********
2026-02-04 01:16:36.488337 | orchestrator | Wednesday 04 February 2026 01:11:32 +0000 (0:00:01.900) 0:05:04.882 ****
2026-02-04 01:16:36.488341 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-02-04 01:16:36.488344 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '',
'/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-04 01:16:36.488350 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-04 01:16:36.488355 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-04 01:16:36.488359 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-04 01:16:36.488364 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-04 01:16:36.488367 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-04 01:16:36.488371 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-04 01:16:36.488374 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-04 01:16:36.488379 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-04 01:16:36.488385 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-04 01:16:36.488388 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-04 01:16:36.488395 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-04 01:16:36.488400 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 
'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-04 01:16:36.488406 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-04 01:16:36.488409 | orchestrator | 2026-02-04 01:16:36.488412 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-02-04 01:16:36.488415 | orchestrator | Wednesday 04 February 2026 01:11:37 +0000 (0:00:04.996) 0:05:09.879 **** 2026-02-04 01:16:36.488423 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-04 01:16:36.488427 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-04 01:16:36.488432 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-04 01:16:36.488435 | orchestrator | skipping: 
[testbed-node-5] 2026-02-04 01:16:36.488439 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-04 01:16:36.488442 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-04 01:16:36.488446 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-04 01:16:36.488452 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:16:36.488456 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-04 01:16:36.488459 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-04 01:16:36.488464 | orchestrator | 
skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-04 01:16:36.488467 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:16:36.488470 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-04 01:16:36.488474 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-04 01:16:36.488479 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:16:36.488484 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-04 01:16:36.488487 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-04 01:16:36.488490 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:16:36.488493 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-04 01:16:36.488498 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-04 01:16:36.488501 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:16:36.488505 | orchestrator | 2026-02-04 01:16:36.488508 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-02-04 01:16:36.488511 | orchestrator | Wednesday 04 February 2026 01:11:40 +0000 (0:00:02.604) 0:05:12.483 **** 2026-02-04 01:16:36.488514 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': 
'30'}}})  2026-02-04 01:16:36.488522 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-04 01:16:36.488527 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-04 01:16:36.488531 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:16:36.488535 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-04 01:16:36.488543 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-04 01:16:36.488548 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-04 01:16:36.488554 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:16:36.488559 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 
'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-02-04 01:16:36.488579 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-04 01:16:36.488585 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:16:36.488594 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-02-04 01:16:36.488600 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-04 01:16:36.488604 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-02-04 01:16:36.488607 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:16:36.488611 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-02-04 01:16:36.488614 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-02-04 01:16:36.488620 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:16:36.488623 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-02-04 01:16:36.488628 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-04 01:16:36.488631 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:16:36.488634 | orchestrator |
2026-02-04 01:16:36.488637 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2026-02-04 01:16:36.488641 | orchestrator | Wednesday 04 February 2026 01:11:44 +0000 (0:00:04.107) 0:05:16.590 ****
2026-02-04 01:16:36.488659 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:16:36.488663 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:16:36.488666 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:16:36.488669 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-04 01:16:36.488672 | orchestrator |
2026-02-04 01:16:36.488675 | orchestrator | TASK [nova-cell : Check nova keyring file] *************************************
2026-02-04 01:16:36.488678 | orchestrator | Wednesday 04 February 2026 01:11:46 +0000 (0:00:01.608) 0:05:18.199 ****
2026-02-04 01:16:36.488681 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-02-04 01:16:36.488685 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-02-04 01:16:36.488688 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-02-04 01:16:36.488691 | orchestrator |
2026-02-04 01:16:36.488694 | orchestrator | TASK [nova-cell : Check cinder keyring file] ***********************************
2026-02-04 01:16:36.488697 | orchestrator | Wednesday 04 February 2026 01:11:48 +0000 (0:00:02.771) 0:05:20.971 ****
2026-02-04 01:16:36.488700 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-02-04 01:16:36.488703 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-02-04 01:16:36.488706 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-02-04 01:16:36.488710 | orchestrator |
2026-02-04 01:16:36.488713 | orchestrator | TASK [nova-cell : Extract nova key from file] **********************************
2026-02-04 01:16:36.488716 | orchestrator | Wednesday 04 February 2026 01:11:51 +0000 (0:00:02.871) 0:05:23.842 ****
2026-02-04 01:16:36.488719 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:16:36.488723 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:16:36.488726 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:16:36.488729 | orchestrator |
2026-02-04 01:16:36.488734 | orchestrator | TASK [nova-cell : Extract cinder key from file] ********************************
2026-02-04 01:16:36.488737 | orchestrator | Wednesday 04 February 2026 01:11:52 +0000 (0:00:00.926) 0:05:24.769 ****
2026-02-04 01:16:36.488743 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:16:36.488747 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:16:36.488750 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:16:36.488753 | orchestrator |
2026-02-04 01:16:36.488756 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] ****************************
2026-02-04 01:16:36.488759 | orchestrator | Wednesday 04 February 2026 01:11:54 +0000 (0:00:01.338) 0:05:26.107 ****
2026-02-04 01:16:36.488762 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2026-02-04 01:16:36.488766 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2026-02-04 01:16:36.488769 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2026-02-04 01:16:36.488772 | orchestrator |
2026-02-04 01:16:36.488775 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] **************************
2026-02-04 01:16:36.488778 | orchestrator | Wednesday 04 February 2026 01:11:55 +0000 (0:00:01.619) 0:05:27.727 ****
2026-02-04 01:16:36.488782 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2026-02-04 01:16:36.488785 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2026-02-04 01:16:36.488788 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2026-02-04 01:16:36.488791 | orchestrator |
2026-02-04 01:16:36.488794 | orchestrator | TASK [nova-cell : Copy over ceph.conf] *****************************************
2026-02-04 01:16:36.488797 | orchestrator | Wednesday 04 February 2026 01:11:57 +0000 (0:00:01.548) 0:05:29.275 ****
2026-02-04 01:16:36.488800 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2026-02-04 01:16:36.488803 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2026-02-04 01:16:36.488806 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2026-02-04 01:16:36.488809 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt)
2026-02-04 01:16:36.488812 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt)
2026-02-04 01:16:36.488815 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt)
2026-02-04 01:16:36.488818 | orchestrator |
2026-02-04 01:16:36.488822 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************
2026-02-04 01:16:36.488825 | orchestrator | Wednesday 04 February 2026 01:12:04 +0000 (0:00:06.714) 0:05:35.990 ****
2026-02-04 01:16:36.488828 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:16:36.488831 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:16:36.488834 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:16:36.488837 | orchestrator |
2026-02-04 01:16:36.488840 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] **************************
2026-02-04 01:16:36.488843 | orchestrator | Wednesday 04 February 2026 01:12:04 +0000 (0:00:00.764) 0:05:36.754 ****
2026-02-04 01:16:36.488847 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:16:36.488850 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:16:36.488853 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:16:36.488856 | orchestrator |
2026-02-04 01:16:36.488860 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] *******************
2026-02-04 01:16:36.488863 | orchestrator | Wednesday 04 February 2026 01:12:05 +0000 (0:00:00.503) 0:05:37.258 ****
2026-02-04 01:16:36.488867 | orchestrator | changed: [testbed-node-3]
2026-02-04 01:16:36.488870 | orchestrator | changed: [testbed-node-4]
2026-02-04 01:16:36.488874 | orchestrator | changed: [testbed-node-5]
2026-02-04 01:16:36.488878 | orchestrator |
2026-02-04 01:16:36.488881 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] *************************
2026-02-04 01:16:36.488885 | orchestrator | Wednesday 04 February 2026 01:12:09 +0000 (0:00:04.094) 0:05:41.353 ****
2026-02-04 01:16:36.488890 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2026-02-04 01:16:36.488895 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2026-02-04 01:16:36.488898 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2026-02-04 01:16:36.488905 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2026-02-04 01:16:36.488909 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2026-02-04 01:16:36.488913 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2026-02-04 01:16:36.488959 | orchestrator |
2026-02-04 01:16:36.488963 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] *****************************
2026-02-04 01:16:36.488967 | orchestrator | Wednesday 04 February 2026 01:12:14 +0000 (0:00:05.200) 0:05:46.554 ****
2026-02-04 01:16:36.488971 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-02-04 01:16:36.488974 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-02-04 01:16:36.488978 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-02-04 01:16:36.488982 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-02-04 01:16:36.488985 | orchestrator | changed: [testbed-node-3]
2026-02-04 01:16:36.488989 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-02-04 01:16:36.488993 | orchestrator | changed: [testbed-node-5]
2026-02-04 01:16:36.488998 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-02-04 01:16:36.489003 | orchestrator | changed: [testbed-node-4]
2026-02-04 01:16:36.489006 | orchestrator |
2026-02-04 01:16:36.489010 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] **********************
2026-02-04 01:16:36.489014 | orchestrator | Wednesday 04 February 2026 01:12:22 +0000 (0:00:07.822) 0:05:54.376 ****
2026-02-04 01:16:36.489017 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:16:36.489021 | orchestrator |
2026-02-04 01:16:36.489028 | orchestrator | TASK [nova-cell : Set nova policy file] ****************************************
2026-02-04 01:16:36.489031 | orchestrator | Wednesday 04 February 2026 01:12:22 +0000 (0:00:00.198) 0:05:54.575 ****
2026-02-04 01:16:36.489035 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:16:36.489039 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:16:36.489042 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:16:36.489046 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:16:36.489049 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:16:36.489053 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:16:36.489057 | orchestrator |
2026-02-04 01:16:36.489060 | orchestrator | TASK [nova-cell : Check for vendordata file] ***********************************
2026-02-04 01:16:36.489064 | orchestrator | Wednesday 04 February 2026 01:12:23 +0000 (0:00:00.679) 0:05:55.255 ****
2026-02-04 01:16:36.489068 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-02-04 01:16:36.489071 | orchestrator |
2026-02-04 01:16:36.489075 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************
2026-02-04 01:16:36.489079 | orchestrator | Wednesday 04 February 2026 01:12:24 +0000 (0:00:00.793) 0:05:56.048 ****
2026-02-04 01:16:36.489083 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:16:36.489086 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:16:36.489090 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:16:36.489094 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:16:36.489098 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:16:36.489101 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:16:36.489105 | orchestrator |
2026-02-04 01:16:36.489109 | orchestrator | TASK [nova-cell : Copying over config.json files for services] *****************
2026-02-04 01:16:36.489113 | orchestrator | Wednesday 04 February 2026 01:12:24 +0000 (0:00:00.735) 0:05:56.783 ****
2026-02-04 01:16:36.489117 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-02-04 01:16:36.489126 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-02-04 01:16:36.489130 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-02-04 01:16:36.489136 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-02-04 01:16:36.489140 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-02-04 01:16:36.489144 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-02-04 01:16:36.489150 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-02-04 01:16:36.489155 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-02-04 01:16:36.489159 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-02-04 01:16:36.489163 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-04 01:16:36.489168 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-04 01:16:36.489172 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-04 01:16:36.489176 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-02-04 01:16:36.489182 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-02-04 01:16:36.489188 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-02-04 01:16:36.489192 | orchestrator |
2026-02-04 01:16:36.489196 | orchestrator | TASK [nova-cell : Copying over nova.conf] **************************************
2026-02-04 01:16:36.489199 | orchestrator | Wednesday 04 February 2026 01:12:28 +0000 (0:00:03.973) 0:06:00.757 ****
2026-02-04 01:16:36.489205 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-02-04 01:16:36.489209 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-02-04 01:16:36.489212 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-02-04 01:16:36.489218 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-02-04 01:16:36.489225 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-02-04 01:16:36.489229 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-02-04 01:16:36.489234 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-02-04 01:16:36.489239 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-02-04 01:16:36.489247 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-02-04 01:16:36.489253 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-02-04 01:16:36.489296 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-02-04 01:16:36.489304 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-02-04 01:16:36.489313 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-04 01:16:36.489319 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-04 01:16:36.489328 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-04 01:16:36.489334 | orchestrator |
2026-02-04 01:16:36.489339 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] *******************
2026-02-04 01:16:36.489344 | orchestrator | Wednesday 04 February 2026 01:12:37 +0000 (0:00:08.491) 0:06:09.249 ****
2026-02-04 01:16:36.489349 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:16:36.489354 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:16:36.489360 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:16:36.489364 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:16:36.489369 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:16:36.489374 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:16:36.489378 | orchestrator |
2026-02-04 01:16:36.489383 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] **************************
2026-02-04 01:16:36.489387 | orchestrator | Wednesday 04 February 2026 01:12:39 +0000 (0:00:02.558) 0:06:11.808 ****
2026-02-04 01:16:36.489393 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-02-04 01:16:36.489397 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-02-04 01:16:36.489402 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-02-04 01:16:36.489407 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-02-04 01:16:36.489412 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-02-04 01:16:36.489417 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-02-04 01:16:36.489422 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-02-04 01:16:36.489427 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:16:36.489436 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-02-04 01:16:36.489441 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:16:36.489449 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-02-04 01:16:36.489455 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:16:36.489460 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-02-04 01:16:36.489465 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-02-04 01:16:36.489469 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-02-04 01:16:36.489474 | orchestrator |
2026-02-04 01:16:36.489480 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] *******************************
2026-02-04 01:16:36.489485
| orchestrator | Wednesday 04 February 2026 01:12:45 +0000 (0:00:05.931) 0:06:17.740 **** 2026-02-04 01:16:36.489491 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:16:36.489496 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:16:36.489501 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:16:36.489506 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:16:36.489511 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:16:36.489514 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:16:36.489518 | orchestrator | 2026-02-04 01:16:36.489521 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2026-02-04 01:16:36.489528 | orchestrator | Wednesday 04 February 2026 01:12:46 +0000 (0:00:00.611) 0:06:18.351 **** 2026-02-04 01:16:36.489531 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-02-04 01:16:36.489534 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-02-04 01:16:36.489537 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-02-04 01:16:36.489541 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-02-04 01:16:36.489546 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-02-04 01:16:36.489549 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-02-04 01:16:36.489552 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-02-04 01:16:36.489556 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 
'service': 'nova-libvirt'})  2026-02-04 01:16:36.489559 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-02-04 01:16:36.489562 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-02-04 01:16:36.489565 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-02-04 01:16:36.489568 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-02-04 01:16:36.489571 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:16:36.489575 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-02-04 01:16:36.489578 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-02-04 01:16:36.489581 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:16:36.489584 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-02-04 01:16:36.489587 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:16:36.489591 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-02-04 01:16:36.489594 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-02-04 01:16:36.489597 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-02-04 01:16:36.489600 | orchestrator | 2026-02-04 01:16:36.489603 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2026-02-04 01:16:36.489606 | orchestrator | Wednesday 04 
February 2026 01:12:53 +0000 (0:00:07.341) 0:06:25.692 **** 2026-02-04 01:16:36.489610 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-02-04 01:16:36.489613 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-02-04 01:16:36.489616 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-02-04 01:16:36.489619 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-02-04 01:16:36.489622 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-02-04 01:16:36.489625 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-02-04 01:16:36.489631 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-02-04 01:16:36.489638 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-02-04 01:16:36.489641 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-02-04 01:16:36.489645 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-04 01:16:36.489648 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-04 01:16:36.489651 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-04 01:16:36.489654 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-02-04 01:16:36.489657 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-02-04 01:16:36.489660 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:16:36.489663 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 
'id_rsa'}) 2026-02-04 01:16:36.489667 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-02-04 01:16:36.489670 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:16:36.489673 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-02-04 01:16:36.489676 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:16:36.489679 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-02-04 01:16:36.489682 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-02-04 01:16:36.489686 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-02-04 01:16:36.489689 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-02-04 01:16:36.489692 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-02-04 01:16:36.489698 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-02-04 01:16:36.489702 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-02-04 01:16:36.489705 | orchestrator | 2026-02-04 01:16:36.489708 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2026-02-04 01:16:36.489711 | orchestrator | Wednesday 04 February 2026 01:13:00 +0000 (0:00:07.162) 0:06:32.855 **** 2026-02-04 01:16:36.489714 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:16:36.489718 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:16:36.489721 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:16:36.489726 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:16:36.489732 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:16:36.489737 | orchestrator | skipping: [testbed-node-2] 
2026-02-04 01:16:36.489742 | orchestrator | 2026-02-04 01:16:36.489746 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2026-02-04 01:16:36.489751 | orchestrator | Wednesday 04 February 2026 01:13:01 +0000 (0:00:00.873) 0:06:33.729 **** 2026-02-04 01:16:36.489756 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:16:36.489762 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:16:36.489767 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:16:36.489772 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:16:36.489777 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:16:36.489782 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:16:36.489785 | orchestrator | 2026-02-04 01:16:36.489789 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2026-02-04 01:16:36.489792 | orchestrator | Wednesday 04 February 2026 01:13:02 +0000 (0:00:00.696) 0:06:34.426 **** 2026-02-04 01:16:36.489795 | orchestrator | changed: [testbed-node-3] 2026-02-04 01:16:36.489798 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:16:36.489801 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:16:36.489804 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:16:36.489810 | orchestrator | changed: [testbed-node-4] 2026-02-04 01:16:36.489813 | orchestrator | changed: [testbed-node-5] 2026-02-04 01:16:36.489818 | orchestrator | 2026-02-04 01:16:36.489823 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2026-02-04 01:16:36.489828 | orchestrator | Wednesday 04 February 2026 01:13:05 +0000 (0:00:02.794) 0:06:37.220 **** 2026-02-04 01:16:36.489834 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 
'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-04 01:16:36.489843 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-04 01:16:36.489850 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
nova-compute 5672'], 'timeout': '30'}}})  2026-02-04 01:16:36.489855 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:16:36.489863 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-04 01:16:36.489869 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-04 01:16:36.489877 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-04 01:16:36.489880 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:16:36.489886 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-04 01:16:36.489890 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-04 01:16:36.489895 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-04 01:16:36.489899 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:16:36.489903 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-04 01:16:36.489908 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-04 01:16:36.489930 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:16:36.489935 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-04 01:16:36.489941 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-04 01:16:36.489949 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:16:36.489955 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-04 01:16:36.489961 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-04 01:16:36.489967 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:16:36.489972 | orchestrator | 2026-02-04 01:16:36.489977 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2026-02-04 01:16:36.489983 | orchestrator | Wednesday 04 February 2026 01:13:07 +0000 (0:00:01.879) 0:06:39.100 **** 2026-02-04 01:16:36.489989 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-02-04 01:16:36.489994 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-02-04 01:16:36.490000 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:16:36.490005 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2026-02-04 01:16:36.490008 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-02-04 01:16:36.490041 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:16:36.490048 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-02-04 01:16:36.490058 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2026-02-04 01:16:36.490063 | 
orchestrator | skipping: [testbed-node-5] 2026-02-04 01:16:36.490068 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2026-02-04 01:16:36.490073 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-02-04 01:16:36.490078 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:16:36.490083 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2026-02-04 01:16:36.490088 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-02-04 01:16:36.490094 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:16:36.490099 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-02-04 01:16:36.490105 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2026-02-04 01:16:36.490110 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:16:36.490115 | orchestrator | 2026-02-04 01:16:36.490120 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2026-02-04 01:16:36.490125 | orchestrator | Wednesday 04 February 2026 01:13:08 +0000 (0:00:01.107) 0:06:40.207 **** 2026-02-04 01:16:36.490131 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh 
version --daemon'], 'timeout': '30'}}}) 2026-02-04 01:16:36.490141 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-04 01:16:36.490147 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-04 01:16:36.490153 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-02-04 01:16:36.490163 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-02-04 01:16:36.490169 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-02-04 01:16:36.490174 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-02-04 01:16:36.490182 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-02-04 01:16:36.490188 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-02-04 01:16:36.490193 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-04 01:16:36.490203 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-02-04 01:16:36.490209 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-04 01:16:36.490215 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-02-04 01:16:36.490220 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-04 01:16:36.490229 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-02-04 01:16:36.490234 | orchestrator | 
2026-02-04 01:16:36.490239 | orchestrator | TASK [nova-cell : include_tasks]
***********************************************
2026-02-04 01:16:36.490244 | orchestrator | Wednesday 04 February 2026 01:13:12 +0000 (0:00:04.007) 0:06:44.215 ****
2026-02-04 01:16:36.490249 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:16:36.490255 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:16:36.490260 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:16:36.490265 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:16:36.490273 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:16:36.490278 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:16:36.490283 | orchestrator | 
2026-02-04 01:16:36.490288 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-02-04 01:16:36.490294 | orchestrator | Wednesday 04 February 2026 01:13:13 +0000 (0:00:01.139) 0:06:45.354 ****
2026-02-04 01:16:36.490299 | orchestrator | 
2026-02-04 01:16:36.490304 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-02-04 01:16:36.490309 | orchestrator | Wednesday 04 February 2026 01:13:13 +0000 (0:00:00.237) 0:06:45.592 ****
2026-02-04 01:16:36.490314 | orchestrator | 
2026-02-04 01:16:36.490319 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-02-04 01:16:36.490324 | orchestrator | Wednesday 04 February 2026 01:13:13 +0000 (0:00:00.321) 0:06:45.913 ****
2026-02-04 01:16:36.490329 | orchestrator | 
2026-02-04 01:16:36.490334 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-02-04 01:16:36.490339 | orchestrator | Wednesday 04 February 2026 01:13:14 +0000 (0:00:00.247) 0:06:46.161 ****
2026-02-04 01:16:36.490345 | orchestrator | 
2026-02-04 01:16:36.490352 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-02-04 01:16:36.490357 | orchestrator | Wednesday 04 February 2026 01:13:14 +0000 (0:00:00.172) 0:06:46.334 ****
2026-02-04 01:16:36.490362 | orchestrator | 
2026-02-04 01:16:36.490368 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-02-04 01:16:36.490373 | orchestrator | Wednesday 04 February 2026 01:13:14 +0000 (0:00:00.183) 0:06:46.518 ****
2026-02-04 01:16:36.490378 | orchestrator | 
2026-02-04 01:16:36.490383 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] *****************
2026-02-04 01:16:36.490389 | orchestrator | Wednesday 04 February 2026 01:13:14 +0000 (0:00:00.445) 0:06:46.963 ****
2026-02-04 01:16:36.490394 | orchestrator | changed: [testbed-node-0]
2026-02-04 01:16:36.490399 | orchestrator | changed: [testbed-node-1]
2026-02-04 01:16:36.490404 | orchestrator | changed: [testbed-node-2]
2026-02-04 01:16:36.490409 | orchestrator | 
2026-02-04 01:16:36.490414 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] ****************
2026-02-04 01:16:36.490419 | orchestrator | Wednesday 04 February 2026 01:13:26 +0000 (0:00:11.736) 0:06:58.699 ****
2026-02-04 01:16:36.490425 | orchestrator | changed: [testbed-node-0]
2026-02-04 01:16:36.490430 | orchestrator | changed: [testbed-node-2]
2026-02-04 01:16:36.490435 | orchestrator | changed: [testbed-node-1]
2026-02-04 01:16:36.490440 | orchestrator | 
2026-02-04 01:16:36.490446 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] ***********************
2026-02-04 01:16:36.490451 | orchestrator | Wednesday 04 February 2026 01:13:43 +0000 (0:00:16.991) 0:07:15.690 ****
2026-02-04 01:16:36.490456 | orchestrator | changed: [testbed-node-3]
2026-02-04 01:16:36.490461 | orchestrator | changed: [testbed-node-4]
2026-02-04 01:16:36.490466 | orchestrator | changed: [testbed-node-5]
2026-02-04 01:16:36.490472 | orchestrator | 
2026-02-04 01:16:36.490477 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] *******************
2026-02-04 01:16:36.490482 | orchestrator | Wednesday 04 February 2026 01:14:20 +0000 (0:00:37.098) 0:07:52.788 ****
2026-02-04 01:16:36.490487 | orchestrator | changed: [testbed-node-3]
2026-02-04 01:16:36.490492 | orchestrator | changed: [testbed-node-5]
2026-02-04 01:16:36.490497 | orchestrator | changed: [testbed-node-4]
2026-02-04 01:16:36.490502 | orchestrator | 
2026-02-04 01:16:36.490507 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] **************
2026-02-04 01:16:36.490513 | orchestrator | Wednesday 04 February 2026 01:14:52 +0000 (0:00:31.421) 0:08:24.210 ****
2026-02-04 01:16:36.490518 | orchestrator | changed: [testbed-node-4]
2026-02-04 01:16:36.490523 | orchestrator | changed: [testbed-node-3]
2026-02-04 01:16:36.490528 | orchestrator | changed: [testbed-node-5]
2026-02-04 01:16:36.490533 | orchestrator | 
2026-02-04 01:16:36.490538 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] *************************
2026-02-04 01:16:36.490543 | orchestrator | Wednesday 04 February 2026 01:14:53 +0000 (0:00:00.831) 0:08:25.041 ****
2026-02-04 01:16:36.490550 | orchestrator | changed: [testbed-node-3]
2026-02-04 01:16:36.490555 | orchestrator | changed: [testbed-node-4]
2026-02-04 01:16:36.490560 | orchestrator | changed: [testbed-node-5]
2026-02-04 01:16:36.490565 | orchestrator | 
2026-02-04 01:16:36.490571 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] *******************
2026-02-04 01:16:36.490576 | orchestrator | Wednesday 04 February 2026 01:14:53 +0000 (0:00:00.666) 0:08:25.708 ****
2026-02-04 01:16:36.490581 | orchestrator | changed: [testbed-node-4]
2026-02-04 01:16:36.490586 | orchestrator | changed: [testbed-node-5]
2026-02-04 01:16:36.490591 | orchestrator | changed: [testbed-node-3]
2026-02-04 01:16:36.490597 | orchestrator | 
2026-02-04 01:16:36.490602 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] ***
2026-02-04 01:16:36.490607 | orchestrator | Wednesday 04 February 2026 01:15:18 +0000 (0:00:24.299) 0:08:50.008 ****
2026-02-04 01:16:36.490612 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:16:36.490617 | orchestrator | 
2026-02-04 01:16:36.490622 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] ****
2026-02-04 01:16:36.490630 | orchestrator | Wednesday 04 February 2026 01:15:18 +0000 (0:00:00.132) 0:08:50.140 ****
2026-02-04 01:16:36.490636 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:16:36.490641 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:16:36.490646 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:16:36.490651 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:16:36.490656 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:16:36.490661 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left).
2026-02-04 01:16:36.490667 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-02-04 01:16:36.490672 | orchestrator | 
2026-02-04 01:16:36.490677 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] *************
2026-02-04 01:16:36.490682 | orchestrator | Wednesday 04 February 2026 01:15:41 +0000 (0:00:23.435) 0:09:13.575 ****
2026-02-04 01:16:36.490687 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:16:36.490692 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:16:36.490697 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:16:36.490703 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:16:36.490708 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:16:36.490714 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:16:36.490719 | orchestrator | 
2026-02-04 01:16:36.490724 | orchestrator | TASK [nova-cell : Include discover_computes.yml] *******************************
2026-02-04 01:16:36.490729
| orchestrator | Wednesday 04 February 2026 01:15:52 +0000 (0:00:11.314) 0:09:24.890 ****
2026-02-04 01:16:36.490734 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:16:36.490740 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:16:36.490745 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:16:36.490750 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:16:36.490755 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:16:36.490758 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-3
2026-02-04 01:16:36.490761 | orchestrator | 
2026-02-04 01:16:36.490764 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-02-04 01:16:36.490767 | orchestrator | Wednesday 04 February 2026 01:15:58 +0000 (0:00:05.449) 0:09:30.339 ****
2026-02-04 01:16:36.490770 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-02-04 01:16:36.490773 | orchestrator | 
2026-02-04 01:16:36.490777 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-02-04 01:16:36.490784 | orchestrator | Wednesday 04 February 2026 01:16:11 +0000 (0:00:13.468) 0:09:43.808 ****
2026-02-04 01:16:36.490787 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-02-04 01:16:36.490790 | orchestrator | 
2026-02-04 01:16:36.490793 | orchestrator | TASK [nova-cell : Fail if cell settings not found] *****************************
2026-02-04 01:16:36.490800 | orchestrator | Wednesday 04 February 2026 01:16:13 +0000 (0:00:01.532) 0:09:45.340 ****
2026-02-04 01:16:36.490803 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:16:36.490807 | orchestrator | 
2026-02-04 01:16:36.490810 | orchestrator | TASK [nova-cell : Discover nova hosts] *****************************************
2026-02-04 01:16:36.490813 | orchestrator | Wednesday 04 February 2026 01:16:14 +0000 (0:00:01.465) 0:09:46.805 ****
2026-02-04 01:16:36.490816 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-02-04 01:16:36.490819 | orchestrator | 
2026-02-04 01:16:36.490822 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************
2026-02-04 01:16:36.490825 | orchestrator | Wednesday 04 February 2026 01:16:26 +0000 (0:00:12.041) 0:09:58.847 ****
2026-02-04 01:16:36.490828 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:16:36.490832 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:16:36.490835 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:16:36.490838 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:16:36.490841 | orchestrator | ok: [testbed-node-1]
2026-02-04 01:16:36.490844 | orchestrator | ok: [testbed-node-2]
2026-02-04 01:16:36.490847 | orchestrator | 
2026-02-04 01:16:36.490850 | orchestrator | PLAY [Refresh nova scheduler cell cache] ***************************************
2026-02-04 01:16:36.490853 | orchestrator | 
2026-02-04 01:16:36.490857 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] *****************************
2026-02-04 01:16:36.490860 | orchestrator | Wednesday 04 February 2026 01:16:28 +0000 (0:00:01.931) 0:10:00.778 ****
2026-02-04 01:16:36.490863 | orchestrator | changed: [testbed-node-0]
2026-02-04 01:16:36.490866 | orchestrator | changed: [testbed-node-1]
2026-02-04 01:16:36.490869 | orchestrator | changed: [testbed-node-2]
2026-02-04 01:16:36.490872 | orchestrator | 
2026-02-04 01:16:36.490875 | orchestrator | PLAY [Reload global Nova super conductor services] *****************************
2026-02-04 01:16:36.490879 | orchestrator | 
2026-02-04 01:16:36.490882 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] ***
2026-02-04 01:16:36.490885 | orchestrator | Wednesday 04 February 2026 01:16:29 +0000 (0:00:01.177) 0:10:01.956 ****
2026-02-04 01:16:36.490888 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:16:36.490891 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:16:36.490894 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:16:36.490897 | orchestrator | 
2026-02-04 01:16:36.490900 | orchestrator | PLAY [Reload Nova cell services] ***********************************************
2026-02-04 01:16:36.490903 | orchestrator | 
2026-02-04 01:16:36.490907 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] *********
2026-02-04 01:16:36.490910 | orchestrator | Wednesday 04 February 2026 01:16:30 +0000 (0:00:00.562) 0:10:02.518 ****
2026-02-04 01:16:36.490921 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)
2026-02-04 01:16:36.490927 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2026-02-04 01:16:36.490931 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2026-02-04 01:16:36.490934 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)
2026-02-04 01:16:36.490937 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)
2026-02-04 01:16:36.490941 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)
2026-02-04 01:16:36.490944 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:16:36.490947 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)
2026-02-04 01:16:36.490950 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2026-02-04 01:16:36.490955 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2026-02-04 01:16:36.490959 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)
2026-02-04 01:16:36.490962 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)
2026-02-04 01:16:36.490965 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)
2026-02-04 01:16:36.490968 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:16:36.490971 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)
2026-02-04 01:16:36.490974 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2026-02-04 01:16:36.490980 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2026-02-04 01:16:36.490983 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)
2026-02-04 01:16:36.490986 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)
2026-02-04 01:16:36.490989 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)
2026-02-04 01:16:36.490992 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:16:36.490995 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)
2026-02-04 01:16:36.490998 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2026-02-04 01:16:36.491002 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2026-02-04 01:16:36.491005 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)
2026-02-04 01:16:36.491008 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)
2026-02-04 01:16:36.491011 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)
2026-02-04 01:16:36.491014 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:16:36.491017 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)
2026-02-04 01:16:36.491020 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2026-02-04 01:16:36.491023 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2026-02-04 01:16:36.491026 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)
2026-02-04 01:16:36.491030 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)
2026-02-04 01:16:36.491033 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)
2026-02-04 01:16:36.491036 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:16:36.491041 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)
2026-02-04 01:16:36.491044 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2026-02-04 01:16:36.491047 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2026-02-04 01:16:36.491050 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)
2026-02-04 01:16:36.491053 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)
2026-02-04 01:16:36.491056 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)
2026-02-04 01:16:36.491060 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:16:36.491063 | orchestrator | 
2026-02-04 01:16:36.491066 | orchestrator | PLAY [Reload global Nova API services] *****************************************
2026-02-04 01:16:36.491069 | orchestrator | 
2026-02-04 01:16:36.491072 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] ***************
2026-02-04 01:16:36.491075 | orchestrator | Wednesday 04 February 2026 01:16:32 +0000 (0:00:01.548) 0:10:04.066 ****
2026-02-04 01:16:36.491079 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)
2026-02-04 01:16:36.491082 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)
2026-02-04 01:16:36.491085 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:16:36.491088 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)
2026-02-04 01:16:36.491091 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)
2026-02-04 01:16:36.491094 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:16:36.491097 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)
2026-02-04 01:16:36.491100 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)
2026-02-04 01:16:36.491104 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:16:36.491107 | orchestrator | 
2026-02-04 01:16:36.491110 | orchestrator | PLAY [Run Nova API online data migrations] *************************************
2026-02-04 01:16:36.491113 | orchestrator
|
2026-02-04 01:16:36.491116 | orchestrator | TASK [nova : Run Nova API online database migrations] **************************
2026-02-04 01:16:36.491119 | orchestrator | Wednesday 04 February 2026 01:16:33 +0000 (0:00:01.003) 0:10:05.070 ****
2026-02-04 01:16:36.491122 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:16:36.491126 | orchestrator | 
2026-02-04 01:16:36.491129 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************
2026-02-04 01:16:36.491134 | orchestrator | 
2026-02-04 01:16:36.491138 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ********************
2026-02-04 01:16:36.491141 | orchestrator | Wednesday 04 February 2026 01:16:33 +0000 (0:00:00.735) 0:10:05.805 ****
2026-02-04 01:16:36.491144 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:16:36.491147 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:16:36.491150 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:16:36.491153 | orchestrator | 
2026-02-04 01:16:36.491157 | orchestrator | PLAY RECAP *********************************************************************
2026-02-04 01:16:36.491160 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-04 01:16:36.491164 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0
2026-02-04 01:16:36.491167 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2026-02-04 01:16:36.491170 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2026-02-04 01:16:36.491175 | orchestrator | testbed-node-3 : ok=43  changed=27  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2026-02-04 01:16:36.491178 | orchestrator | testbed-node-4 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2026-02-04 01:16:36.491181 | orchestrator | testbed-node-5 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2026-02-04 01:16:36.491184 | orchestrator | 
2026-02-04 01:16:36.491188 | orchestrator | 
2026-02-04 01:16:36.491191 | orchestrator | TASKS RECAP ********************************************************************
2026-02-04 01:16:36.491194 | orchestrator | Wednesday 04 February 2026 01:16:34 +0000 (0:00:00.471) 0:10:06.277 ****
2026-02-04 01:16:36.491197 | orchestrator | ===============================================================================
2026-02-04 01:16:36.491200 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 37.10s
2026-02-04 01:16:36.491203 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 31.52s
2026-02-04 01:16:36.491206 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 31.42s
2026-02-04 01:16:36.491209 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 24.30s
2026-02-04 01:16:36.491213 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 23.44s
2026-02-04 01:16:36.491216 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 21.50s
2026-02-04 01:16:36.491219 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 20.78s
2026-02-04 01:16:36.491222 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 20.10s
2026-02-04 01:16:36.491225 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 18.24s
2026-02-04 01:16:36.491228 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 16.99s
2026-02-04 01:16:36.491231 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 14.79s
2026-02-04 01:16:36.491236 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 13.87s
2026-02-04 01:16:36.491239 | orchestrator | nova : Restart nova-api container -------------------------------------- 13.75s
2026-02-04 01:16:36.491242 | orchestrator | nova-cell : Create cell ------------------------------------------------ 13.71s
2026-02-04 01:16:36.491245 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 13.47s
2026-02-04 01:16:36.491249 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 12.04s
2026-02-04 01:16:36.491254 | orchestrator | nova-cell : Restart nova-conductor container --------------------------- 11.74s
2026-02-04 01:16:36.491259 | orchestrator | nova : Copying over nova.conf ------------------------------------------ 11.73s
2026-02-04 01:16:36.491262 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------ 11.31s
2026-02-04 01:16:36.491265 | orchestrator | nova-cell : Copying over nova.conf -------------------------------------- 8.49s
2026-02-04 01:16:36.491269 | orchestrator | 2026-02-04 01:16:36 | INFO  | Task 89f3b89c-853a-4e8d-8e9c-e7afb1d8e9fd is in state STARTED
2026-02-04 01:16:36.491272 | orchestrator | 2026-02-04 01:16:36 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:16:39.531056 | orchestrator | 2026-02-04 01:16:39 | INFO  | Task 89f3b89c-853a-4e8d-8e9c-e7afb1d8e9fd is in state STARTED
2026-02-04 01:16:39.531609 | orchestrator | 2026-02-04 01:16:39 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:16:42.574068 | orchestrator | 2026-02-04 01:16:42 | INFO  | Task 89f3b89c-853a-4e8d-8e9c-e7afb1d8e9fd is in state STARTED
2026-02-04 01:16:42.574136 | orchestrator | 2026-02-04 01:16:42 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:16:45.620751 | orchestrator | 2026-02-04 01:16:45 | INFO  | Task 89f3b89c-853a-4e8d-8e9c-e7afb1d8e9fd is in state STARTED
2026-02-04 01:16:45.621740 | orchestrator | 2026-02-04 01:16:45 | INFO  | 
Wait 1 second(s) until the next check
2026-02-04 01:16:48.663505 | orchestrator | 2026-02-04 01:16:48 | INFO  | Task 89f3b89c-853a-4e8d-8e9c-e7afb1d8e9fd is in state STARTED
2026-02-04 01:16:48.663594 | orchestrator | 2026-02-04 01:16:48 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:16:51.710493 | orchestrator | 2026-02-04 01:16:51 | INFO  | Task 89f3b89c-853a-4e8d-8e9c-e7afb1d8e9fd is in state STARTED
2026-02-04 01:16:51.710545 | orchestrator | 2026-02-04 01:16:51 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:16:54.764779 | orchestrator | 2026-02-04 01:16:54 | INFO  | Task 89f3b89c-853a-4e8d-8e9c-e7afb1d8e9fd is in state STARTED
2026-02-04 01:16:54.764872 | orchestrator | 2026-02-04 01:16:54 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:16:57.808167 | orchestrator | 2026-02-04 01:16:57 | INFO  | Task 89f3b89c-853a-4e8d-8e9c-e7afb1d8e9fd is in state STARTED
2026-02-04 01:16:57.808256 | orchestrator | 2026-02-04 01:16:57 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:17:00.849745 | orchestrator | 2026-02-04 01:17:00 | INFO  | Task 89f3b89c-853a-4e8d-8e9c-e7afb1d8e9fd is in state STARTED
2026-02-04 01:17:00.849812 | orchestrator | 2026-02-04 01:17:00 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:17:03.888349 | orchestrator | 2026-02-04 01:17:03 | INFO  | Task 89f3b89c-853a-4e8d-8e9c-e7afb1d8e9fd is in state STARTED
2026-02-04 01:17:03.888432 | orchestrator | 2026-02-04 01:17:03 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:17:06.928524 | orchestrator | 2026-02-04 01:17:06 | INFO  | Task 89f3b89c-853a-4e8d-8e9c-e7afb1d8e9fd is in state STARTED
2026-02-04 01:17:06.928571 | orchestrator | 2026-02-04 01:17:06 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:17:09.973369 | orchestrator | 2026-02-04 01:17:09 | INFO  | Task 89f3b89c-853a-4e8d-8e9c-e7afb1d8e9fd is in state STARTED
2026-02-04 01:17:09.973421 | orchestrator | 2026-02-04 01:17:09 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:17:13.025635 | orchestrator | 2026-02-04 01:17:13 | INFO  | Task 89f3b89c-853a-4e8d-8e9c-e7afb1d8e9fd is in state STARTED
2026-02-04 01:17:13.025716 | orchestrator | 2026-02-04 01:17:13 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:17:16.073174 | orchestrator | 2026-02-04 01:17:16 | INFO  | Task 89f3b89c-853a-4e8d-8e9c-e7afb1d8e9fd is in state STARTED
2026-02-04 01:17:16.073327 | orchestrator | 2026-02-04 01:17:16 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:17:19.114620 | orchestrator | 2026-02-04 01:17:19 | INFO  | Task 89f3b89c-853a-4e8d-8e9c-e7afb1d8e9fd is in state STARTED
2026-02-04 01:17:19.114700 | orchestrator | 2026-02-04 01:17:19 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:17:22.160015 | orchestrator | 2026-02-04 01:17:22 | INFO  | Task 89f3b89c-853a-4e8d-8e9c-e7afb1d8e9fd is in state STARTED
2026-02-04 01:17:22.160118 | orchestrator | 2026-02-04 01:17:22 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:17:25.205823 | orchestrator | 2026-02-04 01:17:25 | INFO  | Task 89f3b89c-853a-4e8d-8e9c-e7afb1d8e9fd is in state STARTED
2026-02-04 01:17:25.205964 | orchestrator | 2026-02-04 01:17:25 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:17:28.250354 | orchestrator | 2026-02-04 01:17:28 | INFO  | Task 89f3b89c-853a-4e8d-8e9c-e7afb1d8e9fd is in state STARTED
2026-02-04 01:17:28.250434 | orchestrator | 2026-02-04 01:17:28 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:17:31.293561 | orchestrator | 2026-02-04 01:17:31 | INFO  | Task 89f3b89c-853a-4e8d-8e9c-e7afb1d8e9fd is in state STARTED
2026-02-04 01:17:31.293634 | orchestrator | 2026-02-04 01:17:31 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:17:34.334418 | orchestrator | 2026-02-04 01:17:34 | INFO  | Task 89f3b89c-853a-4e8d-8e9c-e7afb1d8e9fd is in state STARTED
2026-02-04 01:17:34.334517 | orchestrator | 2026-02-04 01:17:34 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:17:37.388314 | orchestrator | 2026-02-04 01:17:37 | INFO  | Task 89f3b89c-853a-4e8d-8e9c-e7afb1d8e9fd is in state STARTED
2026-02-04 01:17:37.388369 | orchestrator | 2026-02-04 01:17:37 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:17:40.444270 | orchestrator | 2026-02-04 01:17:40 | INFO  | Task 89f3b89c-853a-4e8d-8e9c-e7afb1d8e9fd is in state STARTED
2026-02-04 01:17:40.444343 | orchestrator | 2026-02-04 01:17:40 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:17:43.485088 | orchestrator | 2026-02-04 01:17:43 | INFO  | Task 89f3b89c-853a-4e8d-8e9c-e7afb1d8e9fd is in state STARTED
2026-02-04 01:17:43.485181 | orchestrator | 2026-02-04 01:17:43 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:17:46.519718 | orchestrator | 2026-02-04 01:17:46 | INFO  | Task 89f3b89c-853a-4e8d-8e9c-e7afb1d8e9fd is in state STARTED
2026-02-04 01:17:46.519801 | orchestrator | 2026-02-04 01:17:46 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:17:49.556449 | orchestrator | 2026-02-04 01:17:49 | INFO  | Task 89f3b89c-853a-4e8d-8e9c-e7afb1d8e9fd is in state STARTED
2026-02-04 01:17:49.556543 | orchestrator | 2026-02-04 01:17:49 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:17:52.593413 | orchestrator | 2026-02-04 01:17:52 | INFO  | Task 89f3b89c-853a-4e8d-8e9c-e7afb1d8e9fd is in state STARTED
2026-02-04 01:17:52.593471 | orchestrator | 2026-02-04 01:17:52 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:17:55.638686 | orchestrator | 2026-02-04 01:17:55 | INFO  | Task 89f3b89c-853a-4e8d-8e9c-e7afb1d8e9fd is in state STARTED
2026-02-04 01:17:55.638745 | orchestrator | 2026-02-04 01:17:55 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:17:58.693272 | orchestrator | 2026-02-04 01:17:58 | INFO  | Task 89f3b89c-853a-4e8d-8e9c-e7afb1d8e9fd is in state STARTED
2026-02-04 01:17:58.693357 | orchestrator | 2026-02-04 01:17:58 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:18:01.730059 | orchestrator | 2026-02-04 01:18:01 | INFO  | Task 89f3b89c-853a-4e8d-8e9c-e7afb1d8e9fd is in state STARTED
2026-02-04 01:18:01.730135 | orchestrator | 2026-02-04 01:18:01 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:18:04.777098 | orchestrator | 2026-02-04 01:18:04 | INFO  | Task 89f3b89c-853a-4e8d-8e9c-e7afb1d8e9fd is in state STARTED
2026-02-04 01:18:04.778111 | orchestrator | 2026-02-04 01:18:04 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:18:07.823788 | orchestrator | 2026-02-04 01:18:07 | INFO  | Task 89f3b89c-853a-4e8d-8e9c-e7afb1d8e9fd is in state STARTED
2026-02-04 01:18:07.823874 | orchestrator | 2026-02-04 01:18:07 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:18:10.878076 | orchestrator | 2026-02-04 01:18:10 | INFO  | Task 89f3b89c-853a-4e8d-8e9c-e7afb1d8e9fd is in state STARTED
2026-02-04 01:18:10.878166 | orchestrator | 2026-02-04 01:18:10 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:18:13.925313 | orchestrator | 2026-02-04 01:18:13 | INFO  | Task 89f3b89c-853a-4e8d-8e9c-e7afb1d8e9fd is in state STARTED
2026-02-04 01:18:13.925418 | orchestrator | 2026-02-04 01:18:13 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:18:16.981927 | orchestrator | 2026-02-04 01:18:16 | INFO  | Task 89f3b89c-853a-4e8d-8e9c-e7afb1d8e9fd is in state STARTED
2026-02-04 01:18:16.982061 | orchestrator | 2026-02-04 01:18:16 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:18:20.028405 | orchestrator | 2026-02-04 01:18:20 | INFO  | Task 89f3b89c-853a-4e8d-8e9c-e7afb1d8e9fd is in state STARTED
2026-02-04 01:18:20.028511 | orchestrator | 2026-02-04 01:18:20 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:18:23.072425 | orchestrator | 2026-02-04 01:18:23 | INFO  | Task 89f3b89c-853a-4e8d-8e9c-e7afb1d8e9fd is in state STARTED
2026-02-04 01:18:23.072492 | orchestrator | 2026-02-04 01:18:23 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:18:26.112633 | orchestrator | 2026-02-04 01:18:26 | INFO  | Task 89f3b89c-853a-4e8d-8e9c-e7afb1d8e9fd is in state STARTED
2026-02-04 01:18:26.112730 | orchestrator | 2026-02-04 01:18:26 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:18:29.150899 | orchestrator | 2026-02-04 01:18:29 | INFO  | Task 89f3b89c-853a-4e8d-8e9c-e7afb1d8e9fd is in state STARTED
2026-02-04 01:18:29.150970 | orchestrator | 2026-02-04 01:18:29 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:18:32.209826 | orchestrator | 2026-02-04 01:18:32 | INFO  | Task 89f3b89c-853a-4e8d-8e9c-e7afb1d8e9fd is in state STARTED
2026-02-04 01:18:32.209924 | orchestrator | 2026-02-04 01:18:32 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:18:35.256327 | orchestrator | 2026-02-04 01:18:35 | INFO  | Task 89f3b89c-853a-4e8d-8e9c-e7afb1d8e9fd is in state STARTED
2026-02-04 01:18:35.256419 | orchestrator | 2026-02-04 01:18:35 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:18:38.326723 | orchestrator | 2026-02-04 01:18:38 | INFO  | Task 89f3b89c-853a-4e8d-8e9c-e7afb1d8e9fd is in state STARTED
2026-02-04 01:18:38.326804 | orchestrator | 2026-02-04 01:18:38 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:18:41.376136 | orchestrator | 2026-02-04 01:18:41 | INFO  | Task 89f3b89c-853a-4e8d-8e9c-e7afb1d8e9fd is in state STARTED
2026-02-04 01:18:41.376204 | orchestrator | 2026-02-04 01:18:41 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:18:44.421972 | orchestrator | 2026-02-04 01:18:44 | INFO  | Task 89f3b89c-853a-4e8d-8e9c-e7afb1d8e9fd is in state STARTED
2026-02-04 01:18:44.422107 | orchestrator | 2026-02-04 01:18:44 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:18:47.471897 | orchestrator | 2026-02-04 01:18:47 | INFO  | Task 89f3b89c-853a-4e8d-8e9c-e7afb1d8e9fd is in state STARTED
2026-02-04 01:18:47.471992 | orchestrator | 2026-02-04 01:18:47 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:18:50.517073
| orchestrator | 2026-02-04 01:18:50 | INFO  | Task 89f3b89c-853a-4e8d-8e9c-e7afb1d8e9fd is in state STARTED 2026-02-04 01:18:50.517143 | orchestrator | 2026-02-04 01:18:50 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:18:53.570253 | orchestrator | 2026-02-04 01:18:53 | INFO  | Task 89f3b89c-853a-4e8d-8e9c-e7afb1d8e9fd is in state STARTED 2026-02-04 01:18:53.570329 | orchestrator | 2026-02-04 01:18:53 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:18:56.615469 | orchestrator | 2026-02-04 01:18:56 | INFO  | Task 89f3b89c-853a-4e8d-8e9c-e7afb1d8e9fd is in state STARTED 2026-02-04 01:18:56.615541 | orchestrator | 2026-02-04 01:18:56 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:18:59.655943 | orchestrator | 2026-02-04 01:18:59 | INFO  | Task 89f3b89c-853a-4e8d-8e9c-e7afb1d8e9fd is in state STARTED 2026-02-04 01:18:59.656013 | orchestrator | 2026-02-04 01:18:59 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:19:02.705342 | orchestrator | 2026-02-04 01:19:02 | INFO  | Task 89f3b89c-853a-4e8d-8e9c-e7afb1d8e9fd is in state STARTED 2026-02-04 01:19:02.705431 | orchestrator | 2026-02-04 01:19:02 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:19:05.753480 | orchestrator | 2026-02-04 01:19:05 | INFO  | Task 89f3b89c-853a-4e8d-8e9c-e7afb1d8e9fd is in state STARTED 2026-02-04 01:19:05.753562 | orchestrator | 2026-02-04 01:19:05 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:19:08.799629 | orchestrator | 2026-02-04 01:19:08 | INFO  | Task 89f3b89c-853a-4e8d-8e9c-e7afb1d8e9fd is in state STARTED 2026-02-04 01:19:08.799697 | orchestrator | 2026-02-04 01:19:08 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:19:11.854485 | orchestrator | 2026-02-04 01:19:11 | INFO  | Task 89f3b89c-853a-4e8d-8e9c-e7afb1d8e9fd is in state STARTED 2026-02-04 01:19:11.854607 | orchestrator | 2026-02-04 01:19:11 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:19:14.904876 | orchestrator 
| 2026-02-04 01:19:14 | INFO  | Task 89f3b89c-853a-4e8d-8e9c-e7afb1d8e9fd is in state STARTED
2026-02-04 01:19:14.904965 | orchestrator | 2026-02-04 01:19:14 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:19:17.951226 | orchestrator | 2026-02-04 01:19:17 | INFO  | Task 89f3b89c-853a-4e8d-8e9c-e7afb1d8e9fd is in state STARTED
2026-02-04 01:19:17.951299 | orchestrator | 2026-02-04 01:19:17 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:19:21.007143 | orchestrator | 2026-02-04 01:19:21 | INFO  | Task 89f3b89c-853a-4e8d-8e9c-e7afb1d8e9fd is in state STARTED
2026-02-04 01:19:21.007236 | orchestrator | 2026-02-04 01:19:21 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:19:24.056373 | orchestrator | 2026-02-04 01:19:24 | INFO  | Task 89f3b89c-853a-4e8d-8e9c-e7afb1d8e9fd is in state STARTED
2026-02-04 01:19:24.056436 | orchestrator | 2026-02-04 01:19:24 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:19:27.096867 | orchestrator | 2026-02-04 01:19:27 | INFO  | Task 89f3b89c-853a-4e8d-8e9c-e7afb1d8e9fd is in state SUCCESS
2026-02-04 01:19:27.098541 | orchestrator |
2026-02-04 01:19:27.098597 | orchestrator |
2026-02-04 01:19:27.098603 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-04 01:19:27.098609 | orchestrator |
2026-02-04 01:19:27.098613 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-04 01:19:27.098618 | orchestrator | Wednesday 04 February 2026 01:14:38 +0000 (0:00:00.327) 0:00:00.327 ****
2026-02-04 01:19:27.098622 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:19:27.098649 | orchestrator | ok: [testbed-node-1]
2026-02-04 01:19:27.098653 | orchestrator | ok: [testbed-node-2]
2026-02-04 01:19:27.098657 | orchestrator |
2026-02-04 01:19:27.098662 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-04 01:19:27.098666 | orchestrator |
Wednesday 04 February 2026 01:14:38 +0000 (0:00:00.364) 0:00:00.692 ****
2026-02-04 01:19:27.098670 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True)
2026-02-04 01:19:27.098674 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True)
2026-02-04 01:19:27.098678 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True)
2026-02-04 01:19:27.098682 | orchestrator |
2026-02-04 01:19:27.098686 | orchestrator | PLAY [Apply role octavia] ******************************************************
2026-02-04 01:19:27.098690 | orchestrator |
2026-02-04 01:19:27.098694 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-02-04 01:19:27.098698 | orchestrator | Wednesday 04 February 2026 01:14:39 +0000 (0:00:00.569) 0:00:01.261 ****
2026-02-04 01:19:27.098702 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 01:19:27.098707 | orchestrator |
2026-02-04 01:19:27.098711 | orchestrator | TASK [service-ks-register : octavia | Creating services] ***********************
2026-02-04 01:19:27.098715 | orchestrator | Wednesday 04 February 2026 01:14:39 +0000 (0:00:00.639) 0:00:01.901 ****
2026-02-04 01:19:27.098719 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer))
2026-02-04 01:19:27.098723 | orchestrator |
2026-02-04 01:19:27.098727 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] **********************
2026-02-04 01:19:27.098731 | orchestrator | Wednesday 04 February 2026 01:14:43 +0000 (0:00:03.523) 0:00:05.425 ****
2026-02-04 01:19:27.098735 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal)
2026-02-04 01:19:27.098739 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public)
2026-02-04 01:19:27.098743 | orchestrator |
2026-02-04 01:19:27.098747 | orchestrator | TASK [service-ks-register : octavia | Creating projects] ***********************
2026-02-04 01:19:27.098751 | orchestrator | Wednesday 04 February 2026 01:14:49 +0000 (0:00:05.956) 0:00:11.382 ****
2026-02-04 01:19:27.098755 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-02-04 01:19:27.098759 | orchestrator |
2026-02-04 01:19:27.098763 | orchestrator | TASK [service-ks-register : octavia | Creating users] **************************
2026-02-04 01:19:27.098767 | orchestrator | Wednesday 04 February 2026 01:14:52 +0000 (0:00:03.059) 0:00:14.442 ****
2026-02-04 01:19:27.098770 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service)
2026-02-04 01:19:27.098775 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service)
2026-02-04 01:19:27.098778 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-02-04 01:19:27.098782 | orchestrator |
2026-02-04 01:19:27.098786 | orchestrator | TASK [service-ks-register : octavia | Creating roles] **************************
2026-02-04 01:19:27.098790 | orchestrator | Wednesday 04 February 2026 01:14:59 +0000 (0:00:06.920) 0:00:21.362 ****
2026-02-04 01:19:27.098794 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-02-04 01:19:27.098822 | orchestrator |
2026-02-04 01:19:27.098827 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] *********************
2026-02-04 01:19:27.098831 | orchestrator | Wednesday 04 February 2026 01:15:02 +0000 (0:00:03.221) 0:00:24.583 ****
2026-02-04 01:19:27.098834 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin)
2026-02-04 01:19:27.098838 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin)
2026-02-04 01:19:27.098842 | orchestrator |
2026-02-04 01:19:27.098846 | orchestrator | TASK [octavia : Adding octavia related roles] **********************************
2026-02-04 01:19:27.098850 | orchestrator | Wednesday 04 February 2026 01:15:09 +0000 (0:00:07.032) 0:00:31.615 ****
2026-02-04 01:19:27.098853 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer)
2026-02-04 01:19:27.098857 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer)
2026-02-04 01:19:27.098866 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member)
2026-02-04 01:19:27.098870 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin)
2026-02-04 01:19:27.098874 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin)
2026-02-04 01:19:27.098878 | orchestrator |
2026-02-04 01:19:27.098882 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-02-04 01:19:27.098886 | orchestrator | Wednesday 04 February 2026 01:15:24 +0000 (0:00:15.053) 0:00:46.669 ****
2026-02-04 01:19:27.098899 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 01:19:27.098903 | orchestrator |
2026-02-04 01:19:27.098907 | orchestrator | TASK [octavia : Create amphora flavor] *****************************************
2026-02-04 01:19:27.098911 | orchestrator | Wednesday 04 February 2026 01:15:25 +0000 (0:00:00.592) 0:00:47.261 ****
2026-02-04 01:19:27.098916 | orchestrator | changed: [testbed-node-0]
2026-02-04 01:19:27.098922 | orchestrator |
2026-02-04 01:19:27.098928 | orchestrator | TASK [octavia : Create nova keypair for amphora] *******************************
2026-02-04 01:19:27.098935 | orchestrator | Wednesday 04 February 2026 01:15:29 +0000 (0:00:04.742) 0:00:52.004 ****
2026-02-04 01:19:27.098944 | orchestrator | changed: [testbed-node-0]
2026-02-04 01:19:27.098950 | orchestrator |
2026-02-04 01:19:27.098956 | orchestrator | TASK [octavia : Get service project id] ****************************************
2026-02-04 01:19:27.099109 | orchestrator | Wednesday 04 February 2026 01:15:34 +0000 (0:00:04.604) 0:00:56.609 ****
2026-02-04 01:19:27.099119 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:19:27.099125 | orchestrator |
2026-02-04 01:19:27.099132 | orchestrator | TASK [octavia : Create security groups for octavia] ****************************
2026-02-04 01:19:27.099139 | orchestrator | Wednesday 04 February 2026 01:15:37 +0000 (0:00:03.288) 0:00:59.897 ****
2026-02-04 01:19:27.099145 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp)
2026-02-04 01:19:27.099152 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp)
2026-02-04 01:19:27.099158 | orchestrator |
2026-02-04 01:19:27.099165 | orchestrator | TASK [octavia : Add rules for security groups] *********************************
2026-02-04 01:19:27.099171 | orchestrator | Wednesday 04 February 2026 01:15:49 +0000 (0:00:11.371) 0:01:11.268 ****
2026-02-04 01:19:27.099178 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}])
2026-02-04 01:19:27.099184 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': 22, 'dst_port': 22}])
2026-02-04 01:19:27.099193 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}])
2026-02-04 01:19:27.099201 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}])
2026-02-04 01:19:27.099208 | orchestrator |
2026-02-04 01:19:27.099214 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************
2026-02-04 01:19:27.099221 | orchestrator | Wednesday 04 February 2026 01:16:04 +0000 (0:00:15.483) 0:01:26.752 ****
2026-02-04 01:19:27.099227 | orchestrator | changed: [testbed-node-0]
2026-02-04 01:19:27.099234 | orchestrator |
2026-02-04 01:19:27.099240 | orchestrator | TASK [octavia : Create loadbalancer management subnet] *************************
2026-02-04 01:19:27.099247 | orchestrator | Wednesday 04 February 2026 01:16:09 +0000 (0:00:04.400) 0:01:31.152 ****
2026-02-04 01:19:27.099253 | orchestrator | changed: [testbed-node-0]
2026-02-04 01:19:27.099260 | orchestrator |
2026-02-04 01:19:27.099267 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] ****************
2026-02-04 01:19:27.099277 | orchestrator | Wednesday 04 February 2026 01:16:14 +0000 (0:00:05.325) 0:01:36.477 ****
2026-02-04 01:19:27.099283 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:19:27.099289 | orchestrator |
2026-02-04 01:19:27.099296 | orchestrator | TASK [octavia : Update loadbalancer management subnet] *************************
2026-02-04 01:19:27.099311 | orchestrator | Wednesday 04 February 2026 01:16:14 +0000 (0:00:00.265) 0:01:36.743 ****
2026-02-04 01:19:27.099319 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:19:27.099325 | orchestrator |
2026-02-04 01:19:27.099332 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-02-04 01:19:27.099340 | orchestrator | Wednesday 04 February 2026 01:16:18 +0000 (0:00:04.048) 0:01:40.792 ****
2026-02-04 01:19:27.099345 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 01:19:27.099350 | orchestrator |
2026-02-04 01:19:27.099355 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] *****************
2026-02-04 01:19:27.099359 | orchestrator | Wednesday 04 February 2026 01:16:19 +0000 (0:00:01.074) 0:01:41.866 ****
2026-02-04 01:19:27.099363 | orchestrator | changed: [testbed-node-1]
2026-02-04 01:19:27.099366 | orchestrator | changed: [testbed-node-0]
2026-02-04 01:19:27.099370 | orchestrator | changed: [testbed-node-2]
2026-02-04 01:19:27.099374 | orchestrator |
2026-02-04 01:19:27.099378 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ********************
2026-02-04 01:19:27.099382 | orchestrator | Wednesday 04 February 2026 01:16:25 +0000 (0:00:05.349) 0:01:47.216 ****
2026-02-04 01:19:27.099386 | orchestrator | changed: [testbed-node-0]
2026-02-04 01:19:27.099390 | orchestrator | changed: [testbed-node-2]
2026-02-04 01:19:27.099393 | orchestrator | changed: [testbed-node-1]
2026-02-04 01:19:27.099397 | orchestrator |
2026-02-04 01:19:27.099401 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************
2026-02-04 01:19:27.099405 | orchestrator | Wednesday 04 February 2026 01:16:29 +0000 (0:00:04.642) 0:01:51.858 ****
2026-02-04 01:19:27.099409 | orchestrator | changed: [testbed-node-0]
2026-02-04 01:19:27.099412 | orchestrator | changed: [testbed-node-1]
2026-02-04 01:19:27.099416 | orchestrator | changed: [testbed-node-2]
2026-02-04 01:19:27.099420 | orchestrator |
2026-02-04 01:19:27.099424 | orchestrator | TASK [octavia : Install isc-dhcp-client package] *******************************
2026-02-04 01:19:27.099428 | orchestrator | Wednesday 04 February 2026 01:16:30 +0000 (0:00:00.819) 0:01:52.677 ****
2026-02-04 01:19:27.099431 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:19:27.099435 | orchestrator | ok: [testbed-node-1]
2026-02-04 01:19:27.099439 | orchestrator | ok: [testbed-node-2]
2026-02-04 01:19:27.099443 | orchestrator |
2026-02-04 01:19:27.099447 | orchestrator | TASK [octavia : Create octavia dhclient conf] **********************************
2026-02-04 01:19:27.099450 | orchestrator | Wednesday 04 February 2026 01:16:32 +0000 (0:00:02.058) 0:01:54.736 ****
2026-02-04 01:19:27.099454 | orchestrator | changed: [testbed-node-2]
2026-02-04 01:19:27.099464 | orchestrator | changed: [testbed-node-0]
2026-02-04 01:19:27.099468 | orchestrator | changed: [testbed-node-1]
2026-02-04 01:19:27.099472 | orchestrator |
2026-02-04 01:19:27.099476 | orchestrator | TASK [octavia : Create octavia-interface service] ******************************
2026-02-04 01:19:27.099479 | orchestrator | Wednesday 04 February 2026 01:16:34 +0000 (0:00:01.355) 0:01:56.091 ****
2026-02-04 01:19:27.099483 | orchestrator | changed: [testbed-node-0]
2026-02-04 01:19:27.099502 | orchestrator | changed: [testbed-node-2]
2026-02-04 01:19:27.099506 | orchestrator | changed: [testbed-node-1]
2026-02-04 01:19:27.099510 | orchestrator |
2026-02-04 01:19:27.099514 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] *****************
2026-02-04 01:19:27.099518 | orchestrator | Wednesday 04 February 2026 01:16:35 +0000 (0:00:01.080) 0:01:57.171 ****
2026-02-04 01:19:27.099522 | orchestrator | changed: [testbed-node-0]
2026-02-04 01:19:27.099526 | orchestrator | changed: [testbed-node-2]
2026-02-04 01:19:27.099529 | orchestrator | changed: [testbed-node-1]
2026-02-04 01:19:27.099533 | orchestrator |
2026-02-04 01:19:27.099544 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ********************
2026-02-04 01:19:27.099548 | orchestrator | Wednesday 04 February 2026 01:16:37 +0000 (0:00:01.932) 0:01:59.103 ****
2026-02-04 01:19:27.099552 | orchestrator | changed: [testbed-node-1]
2026-02-04 01:19:27.099560 | orchestrator | changed: [testbed-node-0]
2026-02-04 01:19:27.099563 | orchestrator | changed: [testbed-node-2]
2026-02-04 01:19:27.099567 | orchestrator |
2026-02-04 01:19:27.099571 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] *****************************
2026-02-04 01:19:27.099575 | orchestrator | Wednesday 04 February 2026 01:16:38 +0000 (0:00:01.656) 0:02:00.760 ****
2026-02-04 01:19:27.099578 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:19:27.099582 | orchestrator | ok: [testbed-node-1]
2026-02-04 01:19:27.099586 | orchestrator | ok: [testbed-node-2]
2026-02-04 01:19:27.099590 | orchestrator |
2026-02-04 01:19:27.099594 | orchestrator | TASK [octavia : Gather facts] **************************************************
2026-02-04 01:19:27.099598 | orchestrator | Wednesday 04 February 2026 01:16:39 +0000 (0:00:00.608) 0:02:01.368 ****
2026-02-04 01:19:27.099601 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:19:27.099605 | orchestrator | ok: [testbed-node-2]
2026-02-04 01:19:27.099609 | orchestrator | ok: [testbed-node-1]
2026-02-04 01:19:27.099613 | orchestrator |
2026-02-04 01:19:27.099617 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-02-04 01:19:27.099621 | orchestrator | Wednesday 04 February 2026 01:16:41 +0000 (0:00:02.468) 0:02:03.837 ****
2026-02-04 01:19:27.099625 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 01:19:27.099629 | orchestrator |
2026-02-04 01:19:27.099633 | orchestrator | TASK [octavia : Get amphora flavor info] ***************************************
2026-02-04 01:19:27.099636 | orchestrator | Wednesday 04 February 2026 01:16:42 +0000 (0:00:00.853) 0:02:04.691 ****
2026-02-04 01:19:27.099640 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:19:27.099644 | orchestrator |
2026-02-04 01:19:27.099648 | orchestrator | TASK [octavia : Get service project id] ****************************************
2026-02-04 01:19:27.099652 | orchestrator | Wednesday 04 February 2026 01:16:46 +0000 (0:00:04.261) 0:02:08.952 ****
2026-02-04 01:19:27.099656 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:19:27.099659 | orchestrator |
2026-02-04 01:19:27.099663 | orchestrator | TASK [octavia : Get security groups for octavia] *******************************
2026-02-04 01:19:27.099667 | orchestrator | Wednesday 04 February 2026 01:16:50 +0000 (0:00:03.671) 0:02:12.624 ****
2026-02-04 01:19:27.099671 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp)
2026-02-04 01:19:27.099675 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp)
2026-02-04 01:19:27.099679 | orchestrator |
2026-02-04
01:19:27.099683 | orchestrator | TASK [octavia : Get loadbalancer management network] *************************** 2026-02-04 01:19:27.099686 | orchestrator | Wednesday 04 February 2026 01:16:57 +0000 (0:00:07.449) 0:02:20.074 **** 2026-02-04 01:19:27.099690 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:19:27.099694 | orchestrator | 2026-02-04 01:19:27.099698 | orchestrator | TASK [octavia : Set octavia resources facts] *********************************** 2026-02-04 01:19:27.099702 | orchestrator | Wednesday 04 February 2026 01:17:01 +0000 (0:00:03.734) 0:02:23.808 **** 2026-02-04 01:19:27.099706 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:19:27.099709 | orchestrator | ok: [testbed-node-1] 2026-02-04 01:19:27.099713 | orchestrator | ok: [testbed-node-2] 2026-02-04 01:19:27.099717 | orchestrator | 2026-02-04 01:19:27.099721 | orchestrator | TASK [octavia : Ensuring config directories exist] ***************************** 2026-02-04 01:19:27.099725 | orchestrator | Wednesday 04 February 2026 01:17:02 +0000 (0:00:00.406) 0:02:24.215 **** 2026-02-04 01:19:27.099732 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': 
'9876', 'tls_backend': 'no'}}}}) 2026-02-04 01:19:27.099751 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-04 01:19:27.099755 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-04 01:19:27.099761 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-04 01:19:27.099766 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-04 01:19:27.099770 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-04 01:19:27.099775 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-04 01:19:27.099787 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-04 01:19:27.099795 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-04 01:19:27.099823 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-04 01:19:27.099828 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-04 01:19:27.099832 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-04 01:19:27.099836 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-04 01:19:27.099844 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-04 01:19:27.099850 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-04 01:19:27.099855 | orchestrator | 2026-02-04 01:19:27.099858 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************ 2026-02-04 01:19:27.099862 | orchestrator | Wednesday 04 February 2026 01:17:04 +0000 (0:00:02.595) 0:02:26.811 **** 2026-02-04 01:19:27.099866 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:19:27.099871 | orchestrator | 2026-02-04 01:19:27.099877 | orchestrator | TASK [octavia : Set octavia policy file] *************************************** 2026-02-04 01:19:27.099881 | orchestrator | Wednesday 04 February 2026 01:17:04 +0000 (0:00:00.153) 0:02:26.964 **** 2026-02-04 01:19:27.099885 | orchestrator | 
skipping: [testbed-node-0] 2026-02-04 01:19:27.099889 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:19:27.099892 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:19:27.099896 | orchestrator | 2026-02-04 01:19:27.099900 | orchestrator | TASK [octavia : Copying over existing policy file] ***************************** 2026-02-04 01:19:27.099904 | orchestrator | Wednesday 04 February 2026 01:17:05 +0000 (0:00:00.574) 0:02:27.538 **** 2026-02-04 01:19:27.099908 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-04 01:19:27.099913 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-04 01:19:27.099917 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-04 01:19:27.099924 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-04 01:19:27.099930 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-04 01:19:27.099935 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:19:27.099944 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-04 01:19:27.099948 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-04 01:19:27.099952 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-04 01:19:27.099956 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-04 01:19:27.099968 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-04 01:19:27.099972 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:19:27.099978 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-04 01:19:27.099988 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-04 01:19:27.099992 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-04 01:19:27.099996 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-04 01:19:27.100000 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-04 01:19:27.100008 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:19:27.100012 | orchestrator | 2026-02-04 01:19:27.100016 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-02-04 01:19:27.100020 | orchestrator | Wednesday 04 February 2026 01:17:06 +0000 (0:00:00.767) 0:02:28.306 **** 2026-02-04 01:19:27.100024 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 01:19:27.100028 | orchestrator | 2026-02-04 01:19:27.100032 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ******** 2026-02-04 01:19:27.100036 | orchestrator | Wednesday 04 February 2026 01:17:06 +0000 (0:00:00.578) 0:02:28.885 **** 2026-02-04 01:19:27.100042 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': 
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-04 01:19:27.100467 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-04 01:19:27.100491 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-04 01:19:27.100495 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-04 01:19:27.100507 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-04 01:19:27.100512 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 
'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-04 01:19:27.100516 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-04 01:19:27.100526 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-04 01:19:27.100535 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-04 01:19:27.100539 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-04 01:19:27.100543 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-04 01:19:27.100551 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-04 01:19:27.100555 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-04 01:19:27.100562 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-04 01:19:27.100569 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-04 01:19:27.100574 | orchestrator | 2026-02-04 01:19:27.100578 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] *** 2026-02-04 01:19:27.100583 | orchestrator | Wednesday 04 February 2026 01:17:11 +0000 (0:00:05.144) 0:02:34.029 **** 2026-02-04 01:19:27.100587 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-04 01:19:27.100595 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-04 01:19:27.100599 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 
'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-04 01:19:27.100603 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-04 01:19:27.100607 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-04 01:19:27.100613 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:19:27.100621 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 
'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-04 01:19:27.100627 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-04 01:19:27.100639 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 
'timeout': '30'}}})  2026-02-04 01:19:27.100649 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-04 01:19:27.100657 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-04 01:19:27.100663 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:19:27.100673 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 
'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-04 01:19:27.100680 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-04 01:19:27.100691 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-04 01:19:27.100705 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-04 01:19:27.100711 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-04 01:19:27.100718 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:19:27.100724 | orchestrator | 2026-02-04 01:19:27.100730 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2026-02-04 01:19:27.100737 | orchestrator | Wednesday 04 February 2026 01:17:12 +0000 (0:00:00.766) 0:02:34.796 **** 2026-02-04 01:19:27.100743 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-04 01:19:27.100750 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-04 01:19:27.100758 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-04 01:19:27.100765 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-04 01:19:27.100794 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-04 01:19:27.100822 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:19:27.100826 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-04 01:19:27.100830 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-04 01:19:27.100835 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-04 01:19:27.100844 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-04 01:19:27.100853 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-04 01:19:27.100861 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:19:27.100865 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-04 01:19:27.100869 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-04 01:19:27.100873 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-04 01:19:27.100877 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-04 01:19:27.100884 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-04 01:19:27.100888 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:19:27.100892 | orchestrator | 2026-02-04 01:19:27.100896 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2026-02-04 01:19:27.100900 | orchestrator | Wednesday 04 February 2026 01:17:14 +0000 (0:00:01.327) 0:02:36.124 **** 2026-02-04 
01:19:27.100911 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-04 01:19:27.100916 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-04 01:19:27.100920 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-04 01:19:27.100924 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-04 01:19:27.100931 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-04 01:19:27.100935 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-04 01:19:27.100946 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-04 01:19:27.100950 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-04 01:19:27.100954 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 
'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-04 01:19:27.100958 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-04 01:19:27.100962 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-04 01:19:27.100968 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-04 01:19:27.100978 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-04 01:19:27.100982 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-04 01:19:27.100986 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-04 01:19:27.100990 | orchestrator | 2026-02-04 01:19:27.100994 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ******************************** 2026-02-04 01:19:27.100998 | orchestrator | Wednesday 04 February 2026 01:17:19 +0000 (0:00:05.686) 0:02:41.810 **** 2026-02-04 01:19:27.101002 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-02-04 01:19:27.101007 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-02-04 01:19:27.101011 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-02-04 01:19:27.101015 | orchestrator | 2026-02-04 01:19:27.101018 | orchestrator | TASK [octavia : Copying over octavia.conf] ************************************* 2026-02-04 01:19:27.101024 | orchestrator | Wednesday 04 February 2026 01:17:21 +0000 (0:00:02.057) 0:02:43.868 **** 2026-02-04 01:19:27.101031 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 
'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-04 01:19:27.101041 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-04 01:19:27.101057 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 
'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-04 01:19:27.101064 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-04 01:19:27.101071 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-04 01:19:27.101078 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-04 01:19:27.101085 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 
'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-04 01:19:27.101091 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-04 01:19:27.101105 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-04 01:19:27.101112 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-04 01:19:27.101117 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-04 01:19:27.101121 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-04 01:19:27.101125 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-04 01:19:27.101129 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-04 01:19:27.101139 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-04 01:19:27.101143 | orchestrator | 2026-02-04 01:19:27.101147 | orchestrator | TASK [octavia : Copying over Octavia SSH key] ********************************** 2026-02-04 01:19:27.101151 | orchestrator | Wednesday 04 February 2026 01:17:39 +0000 (0:00:18.057) 0:03:01.925 **** 2026-02-04 01:19:27.101155 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:19:27.101159 | orchestrator | changed: [testbed-node-2] 
2026-02-04 01:19:27.101163 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:19:27.101167 | orchestrator | 2026-02-04 01:19:27.101171 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ****************** 2026-02-04 01:19:27.101174 | orchestrator | Wednesday 04 February 2026 01:17:41 +0000 (0:00:01.753) 0:03:03.679 **** 2026-02-04 01:19:27.101178 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-02-04 01:19:27.101182 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-02-04 01:19:27.101189 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-02-04 01:19:27.101193 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-02-04 01:19:27.101196 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-02-04 01:19:27.101200 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-02-04 01:19:27.101204 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-02-04 01:19:27.101208 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-02-04 01:19:27.101211 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-02-04 01:19:27.101215 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-02-04 01:19:27.101219 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-02-04 01:19:27.101223 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-02-04 01:19:27.101227 | orchestrator | 2026-02-04 01:19:27.101230 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************ 2026-02-04 01:19:27.101234 | orchestrator | Wednesday 04 February 2026 01:17:46 +0000 (0:00:05.362) 0:03:09.041 **** 2026-02-04 01:19:27.101238 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-02-04 01:19:27.101242 | orchestrator | changed: 
[testbed-node-2] => (item=client.cert-and-key.pem) 2026-02-04 01:19:27.101246 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-02-04 01:19:27.101252 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-02-04 01:19:27.101258 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-02-04 01:19:27.101264 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-02-04 01:19:27.101270 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-02-04 01:19:27.101276 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-02-04 01:19:27.101281 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-02-04 01:19:27.101288 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-02-04 01:19:27.101293 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-02-04 01:19:27.101300 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-02-04 01:19:27.101306 | orchestrator | 2026-02-04 01:19:27.101316 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] ********** 2026-02-04 01:19:27.101323 | orchestrator | Wednesday 04 February 2026 01:17:53 +0000 (0:00:06.041) 0:03:15.082 **** 2026-02-04 01:19:27.101329 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-02-04 01:19:27.101335 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-02-04 01:19:27.101341 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-02-04 01:19:27.101347 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-02-04 01:19:27.101353 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-02-04 01:19:27.101359 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-02-04 01:19:27.101365 | orchestrator | changed: 
[testbed-node-0] => (item=server_ca.cert.pem) 2026-02-04 01:19:27.101369 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-02-04 01:19:27.101373 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-02-04 01:19:27.101376 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-02-04 01:19:27.101380 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-02-04 01:19:27.101384 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-02-04 01:19:27.101388 | orchestrator | 2026-02-04 01:19:27.101392 | orchestrator | TASK [octavia : Check octavia containers] ************************************** 2026-02-04 01:19:27.101395 | orchestrator | Wednesday 04 February 2026 01:17:58 +0000 (0:00:05.563) 0:03:20.646 **** 2026-02-04 01:19:27.101404 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-04 01:19:27.101413 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-04 01:19:27.101417 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-04 01:19:27.101427 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-04 01:19:27.101435 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-04 01:19:27.101444 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-04 01:19:27.101463 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 
3306'], 'timeout': '30'}}}) 2026-02-04 01:19:27.101475 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-04 01:19:27.101482 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-04 01:19:27.101488 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 
2026-02-04 01:19:27.101500 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-04 01:19:27.101506 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-04 01:19:27.101513 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-04 01:19:27.101522 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-04 01:19:27.101534 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-04 01:19:27.101540 | orchestrator | 2026-02-04 01:19:27.101547 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-02-04 01:19:27.101553 | orchestrator | Wednesday 04 February 2026 01:18:02 +0000 (0:00:04.153) 0:03:24.799 **** 2026-02-04 01:19:27.101559 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:19:27.101566 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:19:27.101570 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:19:27.101574 | orchestrator | 2026-02-04 01:19:27.101582 | orchestrator | TASK [octavia : Creating Octavia database] ************************************* 2026-02-04 01:19:27.101586 | orchestrator | Wednesday 04 February 2026 01:18:03 +0000 (0:00:00.390) 0:03:25.190 **** 2026-02-04 01:19:27.101589 | orchestrator | changed: [testbed-node-0] 2026-02-04 
01:19:27.101593 | orchestrator | 2026-02-04 01:19:27.101597 | orchestrator | TASK [octavia : Creating Octavia persistence database] ************************* 2026-02-04 01:19:27.101601 | orchestrator | Wednesday 04 February 2026 01:18:05 +0000 (0:00:02.314) 0:03:27.505 **** 2026-02-04 01:19:27.101605 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:19:27.101608 | orchestrator | 2026-02-04 01:19:27.101612 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ******** 2026-02-04 01:19:27.101616 | orchestrator | Wednesday 04 February 2026 01:18:07 +0000 (0:00:02.302) 0:03:29.808 **** 2026-02-04 01:19:27.101620 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:19:27.101624 | orchestrator | 2026-02-04 01:19:27.101628 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] *** 2026-02-04 01:19:27.101632 | orchestrator | Wednesday 04 February 2026 01:18:10 +0000 (0:00:02.352) 0:03:32.160 **** 2026-02-04 01:19:27.101635 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:19:27.101639 | orchestrator | 2026-02-04 01:19:27.101643 | orchestrator | TASK [octavia : Running Octavia bootstrap container] *************************** 2026-02-04 01:19:27.101647 | orchestrator | Wednesday 04 February 2026 01:18:13 +0000 (0:00:03.111) 0:03:35.272 **** 2026-02-04 01:19:27.101651 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:19:27.101654 | orchestrator | 2026-02-04 01:19:27.101658 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-02-04 01:19:27.101662 | orchestrator | Wednesday 04 February 2026 01:18:34 +0000 (0:00:21.758) 0:03:57.031 **** 2026-02-04 01:19:27.101666 | orchestrator | 2026-02-04 01:19:27.101670 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-02-04 01:19:27.101674 | orchestrator | Wednesday 04 February 2026 01:18:35 +0000 (0:00:00.081) 0:03:57.112 **** 
2026-02-04 01:19:27.101678 | orchestrator | 2026-02-04 01:19:27.101681 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-02-04 01:19:27.101685 | orchestrator | Wednesday 04 February 2026 01:18:35 +0000 (0:00:00.073) 0:03:57.186 **** 2026-02-04 01:19:27.101689 | orchestrator | 2026-02-04 01:19:27.101693 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] ********************** 2026-02-04 01:19:27.101697 | orchestrator | Wednesday 04 February 2026 01:18:35 +0000 (0:00:00.079) 0:03:57.266 **** 2026-02-04 01:19:27.101700 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:19:27.101704 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:19:27.101708 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:19:27.101712 | orchestrator | 2026-02-04 01:19:27.101716 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] ************* 2026-02-04 01:19:27.101719 | orchestrator | Wednesday 04 February 2026 01:18:50 +0000 (0:00:15.664) 0:04:12.930 **** 2026-02-04 01:19:27.101723 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:19:27.101727 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:19:27.101731 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:19:27.101734 | orchestrator | 2026-02-04 01:19:27.101738 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] *********** 2026-02-04 01:19:27.101742 | orchestrator | Wednesday 04 February 2026 01:18:57 +0000 (0:00:07.057) 0:04:19.987 **** 2026-02-04 01:19:27.101746 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:19:27.101750 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:19:27.101754 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:19:27.101758 | orchestrator | 2026-02-04 01:19:27.101762 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] ************* 2026-02-04 01:19:27.101765 | orchestrator | Wednesday 04 
February 2026 01:19:09 +0000 (0:00:11.150) 0:04:31.137 **** 2026-02-04 01:19:27.101769 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:19:27.101773 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:19:27.101777 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:19:27.101784 | orchestrator | 2026-02-04 01:19:27.101787 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] ******************* 2026-02-04 01:19:27.101791 | orchestrator | Wednesday 04 February 2026 01:19:14 +0000 (0:00:05.677) 0:04:36.815 **** 2026-02-04 01:19:27.101795 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:19:27.101878 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:19:27.101883 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:19:27.101887 | orchestrator | 2026-02-04 01:19:27.101891 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 01:19:27.101898 | orchestrator | testbed-node-0 : ok=57  changed=38  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-04 01:19:27.101903 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-04 01:19:27.101907 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-04 01:19:27.101911 | orchestrator | 2026-02-04 01:19:27.101915 | orchestrator | 2026-02-04 01:19:27.101919 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-04 01:19:27.101923 | orchestrator | Wednesday 04 February 2026 01:19:25 +0000 (0:00:10.293) 0:04:47.108 **** 2026-02-04 01:19:27.101930 | orchestrator | =============================================================================== 2026-02-04 01:19:27.101934 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 21.76s 2026-02-04 01:19:27.101938 | orchestrator | octavia : Copying over octavia.conf 
------------------------------------ 18.06s 2026-02-04 01:19:27.101942 | orchestrator | octavia : Restart octavia-api container -------------------------------- 15.66s 2026-02-04 01:19:27.101946 | orchestrator | octavia : Add rules for security groups -------------------------------- 15.48s 2026-02-04 01:19:27.101949 | orchestrator | octavia : Adding octavia related roles --------------------------------- 15.05s 2026-02-04 01:19:27.101953 | orchestrator | octavia : Create security groups for octavia --------------------------- 11.37s 2026-02-04 01:19:27.101957 | orchestrator | octavia : Restart octavia-health-manager container --------------------- 11.15s 2026-02-04 01:19:27.101961 | orchestrator | octavia : Restart octavia-worker container ----------------------------- 10.29s 2026-02-04 01:19:27.101964 | orchestrator | octavia : Get security groups for octavia ------------------------------- 7.45s 2026-02-04 01:19:27.101968 | orchestrator | octavia : Restart octavia-driver-agent container ------------------------ 7.06s 2026-02-04 01:19:27.101972 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.03s 2026-02-04 01:19:27.101976 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 6.92s 2026-02-04 01:19:27.101980 | orchestrator | octavia : Copying certificate files for octavia-housekeeping ------------ 6.04s 2026-02-04 01:19:27.101984 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 5.96s 2026-02-04 01:19:27.101987 | orchestrator | octavia : Copying over config.json files for services ------------------- 5.69s 2026-02-04 01:19:27.101991 | orchestrator | octavia : Restart octavia-housekeeping container ------------------------ 5.68s 2026-02-04 01:19:27.101995 | orchestrator | octavia : Copying certificate files for octavia-health-manager ---------- 5.56s 2026-02-04 01:19:27.101999 | orchestrator | octavia : Copying certificate files for 
octavia-worker ------------------ 5.36s 2026-02-04 01:19:27.102003 | orchestrator | octavia : Create ports for Octavia health-manager nodes ----------------- 5.35s 2026-02-04 01:19:27.102007 | orchestrator | octavia : Create loadbalancer management subnet ------------------------- 5.33s 2026-02-04 01:19:27.102011 | orchestrator | 2026-02-04 01:19:27 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-02-04 01:20:27.976515 | orchestrator | 2026-02-04 01:20:28.414613 | orchestrator | 2026-02-04 01:20:28.421143 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Wed Feb 4 01:20:28 UTC 2026 2026-02-04 01:20:28.422304 | orchestrator | 2026-02-04 01:20:28.929591 | orchestrator | ok: Runtime: 0:37:44.629605 2026-02-04 01:20:29.232403 | 2026-02-04 01:20:29.232588 | TASK [Bootstrap services] 2026-02-04 01:20:29.990212 | orchestrator | 2026-02-04 01:20:29.990392 | orchestrator | # BOOTSTRAP 2026-02-04 01:20:29.990410 | orchestrator | 2026-02-04 01:20:29.990417 | orchestrator | + set -e 2026-02-04 01:20:29.990424 | orchestrator | + echo 2026-02-04 01:20:29.990435 | orchestrator | + echo '# BOOTSTRAP' 2026-02-04 01:20:29.990448 | orchestrator | + echo 2026-02-04 01:20:29.990483 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2026-02-04 01:20:29.999247 | orchestrator | + set -e 2026-02-04 01:20:29.999326 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh 2026-02-04 01:20:35.700970 | orchestrator | 2026-02-04 01:20:35 | INFO  | It takes a moment until task cacd561f-1c85-4b6b-97fb-b7a92fc3060b (flavor-manager) has been started and output is visible here. 
2026-02-04 01:20:43.907320 | orchestrator | 2026-02-04 01:20:39 | INFO  | Flavor SCS-1L-1 created 2026-02-04 01:20:43.907490 | orchestrator | 2026-02-04 01:20:39 | INFO  | Flavor SCS-1L-1-5 created 2026-02-04 01:20:43.907507 | orchestrator | 2026-02-04 01:20:39 | INFO  | Flavor SCS-1V-2 created 2026-02-04 01:20:43.907515 | orchestrator | 2026-02-04 01:20:40 | INFO  | Flavor SCS-1V-2-5 created 2026-02-04 01:20:43.907523 | orchestrator | 2026-02-04 01:20:40 | INFO  | Flavor SCS-1V-4 created 2026-02-04 01:20:43.907532 | orchestrator | 2026-02-04 01:20:40 | INFO  | Flavor SCS-1V-4-10 created 2026-02-04 01:20:43.907540 | orchestrator | 2026-02-04 01:20:40 | INFO  | Flavor SCS-1V-8 created 2026-02-04 01:20:43.907548 | orchestrator | 2026-02-04 01:20:41 | INFO  | Flavor SCS-1V-8-20 created 2026-02-04 01:20:43.907569 | orchestrator | 2026-02-04 01:20:41 | INFO  | Flavor SCS-2V-4 created 2026-02-04 01:20:43.907577 | orchestrator | 2026-02-04 01:20:41 | INFO  | Flavor SCS-2V-4-10 created 2026-02-04 01:20:43.907585 | orchestrator | 2026-02-04 01:20:41 | INFO  | Flavor SCS-2V-8 created 2026-02-04 01:20:43.907593 | orchestrator | 2026-02-04 01:20:41 | INFO  | Flavor SCS-2V-8-20 created 2026-02-04 01:20:43.907601 | orchestrator | 2026-02-04 01:20:41 | INFO  | Flavor SCS-2V-16 created 2026-02-04 01:20:43.907609 | orchestrator | 2026-02-04 01:20:41 | INFO  | Flavor SCS-2V-16-50 created 2026-02-04 01:20:43.907617 | orchestrator | 2026-02-04 01:20:41 | INFO  | Flavor SCS-4V-8 created 2026-02-04 01:20:43.907625 | orchestrator | 2026-02-04 01:20:42 | INFO  | Flavor SCS-4V-8-20 created 2026-02-04 01:20:43.907632 | orchestrator | 2026-02-04 01:20:42 | INFO  | Flavor SCS-4V-16 created 2026-02-04 01:20:43.907640 | orchestrator | 2026-02-04 01:20:42 | INFO  | Flavor SCS-4V-16-50 created 2026-02-04 01:20:43.907648 | orchestrator | 2026-02-04 01:20:42 | INFO  | Flavor SCS-4V-32 created 2026-02-04 01:20:43.907656 | orchestrator | 2026-02-04 01:20:42 | INFO  | Flavor SCS-4V-32-100 created 
2026-02-04 01:20:43.907663 | orchestrator | 2026-02-04 01:20:42 | INFO  | Flavor SCS-8V-16 created 2026-02-04 01:20:43.907671 | orchestrator | 2026-02-04 01:20:42 | INFO  | Flavor SCS-8V-16-50 created 2026-02-04 01:20:43.907679 | orchestrator | 2026-02-04 01:20:42 | INFO  | Flavor SCS-8V-32 created 2026-02-04 01:20:43.907687 | orchestrator | 2026-02-04 01:20:43 | INFO  | Flavor SCS-8V-32-100 created 2026-02-04 01:20:43.907695 | orchestrator | 2026-02-04 01:20:43 | INFO  | Flavor SCS-16V-32 created 2026-02-04 01:20:43.907703 | orchestrator | 2026-02-04 01:20:43 | INFO  | Flavor SCS-16V-32-100 created 2026-02-04 01:20:43.907711 | orchestrator | 2026-02-04 01:20:43 | INFO  | Flavor SCS-2V-4-20s created 2026-02-04 01:20:43.907719 | orchestrator | 2026-02-04 01:20:43 | INFO  | Flavor SCS-4V-8-50s created 2026-02-04 01:20:43.907726 | orchestrator | 2026-02-04 01:20:43 | INFO  | Flavor SCS-8V-32-100s created 2026-02-04 01:20:46.615642 | orchestrator | 2026-02-04 01:20:46 | INFO  | Trying to run play bootstrap-basic in environment openstack 2026-02-04 01:20:46.626772 | orchestrator | 2026-02-04 01:20:46 | INFO  | Prepare task for execution of bootstrap-basic. 2026-02-04 01:20:46.695008 | orchestrator | 2026-02-04 01:20:46 | INFO  | Task c7b9a03e-4a2b-49ac-9768-4647c652a402 (bootstrap-basic) was prepared for execution. 2026-02-04 01:20:46.695085 | orchestrator | 2026-02-04 01:20:46 | INFO  | It takes a moment until task c7b9a03e-4a2b-49ac-9768-4647c652a402 (bootstrap-basic) has been started and output is visible here. 
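The flavor names created by flavor-manager above follow the SCS flavor naming scheme (roughly `SCS-<vCPUs><class>-<RAM GiB>[-<disk GB>[<suffix>]]`, e.g. `SCS-4V-16-50`). A minimal sketch of decoding such a name, assuming that scheme; `parse_scs_flavor` is a hypothetical helper, not part of the OSISM tooling:

```python
import re

# Assumed SCS naming scheme: SCS-<vCPUs><class>-<RAM GiB>[-<disk GB>[<suffix>]]
# <class> 'V' or 'L', optional lowercase suffix (e.g. 's') on the disk size.
SCS_RE = re.compile(r"^SCS-(\d+)([A-Z])-(\d+)(?:-(\d+)([a-z]?))?$")

def parse_scs_flavor(name: str) -> dict:
    """Decode an SCS flavor name into its resource components."""
    m = SCS_RE.match(name)
    if not m:
        raise ValueError(f"not an SCS flavor name: {name}")
    vcpus, cpu_class, ram, disk, suffix = m.groups()
    return {
        "vcpus": int(vcpus),
        "cpu_class": cpu_class,                   # e.g. V or L
        "ram_gib": int(ram),
        "disk_gb": int(disk) if disk else None,   # None: no root disk encoded
        "disk_suffix": suffix or "",              # e.g. 's' in SCS-2V-4-20s
    }
```

Under that reading, `SCS-4V-16-50` is 4 vCPUs, 16 GiB RAM, 50 GB root disk, and `SCS-16V-32` encodes no root disk size.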
2026-02-04 01:21:40.028617 | orchestrator |
2026-02-04 01:21:40.028799 | orchestrator | PLAY [Bootstrap basic OpenStack services] **************************************
2026-02-04 01:21:40.028817 | orchestrator |
2026-02-04 01:21:40.028825 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-02-04 01:21:40.028834 | orchestrator | Wednesday 04 February 2026 01:20:52 +0000 (0:00:00.092) 0:00:00.092 ****
2026-02-04 01:21:40.028844 | orchestrator | ok: [localhost]
2026-02-04 01:21:40.028854 | orchestrator |
2026-02-04 01:21:40.028862 | orchestrator | TASK [Get volume type LUKS] ****************************************************
2026-02-04 01:21:40.028872 | orchestrator | Wednesday 04 February 2026 01:20:54 +0000 (0:00:02.296) 0:00:02.388 ****
2026-02-04 01:21:40.028881 | orchestrator | ok: [localhost]
2026-02-04 01:21:40.028890 | orchestrator |
2026-02-04 01:21:40.028898 | orchestrator | TASK [Create volume type LUKS] *************************************************
2026-02-04 01:21:40.028906 | orchestrator | Wednesday 04 February 2026 01:21:06 +0000 (0:00:11.827) 0:00:14.216 ****
2026-02-04 01:21:40.028915 | orchestrator | changed: [localhost]
2026-02-04 01:21:40.028924 | orchestrator |
2026-02-04 01:21:40.028933 | orchestrator | TASK [Create public network] ***************************************************
2026-02-04 01:21:40.028941 | orchestrator | Wednesday 04 February 2026 01:21:14 +0000 (0:00:08.568) 0:00:22.784 ****
2026-02-04 01:21:40.028949 | orchestrator | changed: [localhost]
2026-02-04 01:21:40.028958 | orchestrator |
2026-02-04 01:21:40.028968 | orchestrator | TASK [Set public network to default] *******************************************
2026-02-04 01:21:40.028981 | orchestrator | Wednesday 04 February 2026 01:21:20 +0000 (0:00:05.578) 0:00:28.362 ****
2026-02-04 01:21:40.028992 | orchestrator | changed: [localhost]
2026-02-04 01:21:40.029002 | orchestrator |
2026-02-04 01:21:40.029012 | orchestrator | TASK [Create public subnet] ****************************************************
2026-02-04 01:21:40.029023 | orchestrator | Wednesday 04 February 2026 01:21:27 +0000 (0:00:07.143) 0:00:35.506 ****
2026-02-04 01:21:40.029033 | orchestrator | changed: [localhost]
2026-02-04 01:21:40.029043 | orchestrator |
2026-02-04 01:21:40.029052 | orchestrator | TASK [Create default IPv4 subnet pool] *****************************************
2026-02-04 01:21:40.029063 | orchestrator | Wednesday 04 February 2026 01:21:32 +0000 (0:00:04.796) 0:00:40.303 ****
2026-02-04 01:21:40.029073 | orchestrator | changed: [localhost]
2026-02-04 01:21:40.029082 | orchestrator |
2026-02-04 01:21:40.029107 | orchestrator | TASK [Create manager role] *****************************************************
2026-02-04 01:21:40.029119 | orchestrator | Wednesday 04 February 2026 01:21:36 +0000 (0:00:03.741) 0:00:44.044 ****
2026-02-04 01:21:40.029130 | orchestrator | ok: [localhost]
2026-02-04 01:21:40.029140 | orchestrator |
2026-02-04 01:21:40.029151 | orchestrator | PLAY RECAP *********************************************************************
2026-02-04 01:21:40.029161 | orchestrator | localhost : ok=8  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-04 01:21:40.029173 | orchestrator |
2026-02-04 01:21:40.029183 | orchestrator |
2026-02-04 01:21:40.029194 | orchestrator | TASKS RECAP ********************************************************************
2026-02-04 01:21:40.029204 | orchestrator | Wednesday 04 February 2026 01:21:39 +0000 (0:00:03.698) 0:00:47.743 ****
2026-02-04 01:21:40.029214 | orchestrator | ===============================================================================
2026-02-04 01:21:40.029224 | orchestrator | Get volume type LUKS --------------------------------------------------- 11.83s
2026-02-04 01:21:40.029234 | orchestrator | Create volume type LUKS ------------------------------------------------- 8.57s
2026-02-04 01:21:40.029244 | orchestrator | Set public network to default ------------------------------------------- 7.14s
2026-02-04 01:21:40.029281 | orchestrator | Create public network --------------------------------------------------- 5.58s
2026-02-04 01:21:40.029292 | orchestrator | Create public subnet ---------------------------------------------------- 4.80s
2026-02-04 01:21:40.029301 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 3.74s
2026-02-04 01:21:40.029311 | orchestrator | Create manager role ----------------------------------------------------- 3.70s
2026-02-04 01:21:40.029322 | orchestrator | Gathering Facts --------------------------------------------------------- 2.30s
2026-02-04 01:21:42.730915 | orchestrator | 2026-02-04 01:21:42 | INFO  | It takes a moment until task ba7c1179-dcc1-4740-ba12-a6b2d8ef6e24 (image-manager) has been started and output is visible here.
2026-02-04 01:23:46.471987 | orchestrator | 2026-02-04 01:21:45 | INFO  | Processing image 'Cirros 0.6.2'
2026-02-04 01:23:46.472060 | orchestrator | 2026-02-04 01:23:45 | ERROR  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 504
2026-02-04 01:23:46.472069 | orchestrator | 2026-02-04 01:23:45 | ERROR  | Skipping 'Cirros 0.6.2' due to HTTP status code 504
2026-02-04 01:23:46.472073 | orchestrator |
2026-02-04 01:23:46.472079 | orchestrator | ERROR: One or more errors occurred during the execution of the program, please check the output.
2026-02-04 01:23:46.966673 | orchestrator | ERROR
2026-02-04 01:23:46.966890 | orchestrator | {
2026-02-04 01:23:46.966932 | orchestrator | "delta": "0:03:17.244176",
2026-02-04 01:23:46.966957 | orchestrator | "end": "2026-02-04 01:23:46.864092",
2026-02-04 01:23:46.966978 | orchestrator | "msg": "non-zero return code",
2026-02-04 01:23:46.966998 | orchestrator | "rc": 1,
2026-02-04 01:23:46.967054 | orchestrator | "start": "2026-02-04 01:20:29.619916"
2026-02-04 01:23:46.967076 | orchestrator | } failure
2026-02-04 01:23:46.974179 |
2026-02-04 01:23:46.974274 | PLAY RECAP
2026-02-04 01:23:46.974325 | orchestrator | ok: 22 changed: 9 unreachable: 0 failed: 1 skipped: 3 rescued: 0 ignored: 0
2026-02-04 01:23:46.974349 |
2026-02-04 01:23:47.173938 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main]
2026-02-04 01:23:47.175204 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-02-04 01:23:47.908746 |
2026-02-04 01:23:47.908908 | PLAY [Post output play]
2026-02-04 01:23:47.924993 |
2026-02-04 01:23:47.925161 | LOOP [stage-output : Register sources]
2026-02-04 01:23:47.995717 |
2026-02-04 01:23:47.996086 | TASK [stage-output : Check sudo]
2026-02-04 01:23:48.854816 | orchestrator | sudo: a password is required
2026-02-04 01:23:49.036490 | orchestrator | ok: Runtime: 0:00:00.010650
2026-02-04 01:23:49.050627 |
2026-02-04 01:23:49.050788 | LOOP [stage-output : Set source and destination for files and folders]
2026-02-04 01:23:49.101396 |
2026-02-04 01:23:49.101704 | TASK [stage-output : Build a list of source, dest dictionaries]
2026-02-04 01:23:49.180490 | orchestrator | ok
2026-02-04 01:23:49.189894 |
2026-02-04 01:23:49.190122 | LOOP [stage-output : Ensure target folders exist]
2026-02-04 01:23:49.685703 | orchestrator | ok: "docs"
2026-02-04 01:23:49.686057 |
2026-02-04 01:23:49.967638 | orchestrator | ok: "artifacts"
2026-02-04 01:23:50.252282 | orchestrator | ok: "logs"
2026-02-04 01:23:50.269477 |
2026-02-04 01:23:50.269640 | LOOP [stage-output : Copy files and folders to staging folder]
2026-02-04 01:23:50.306764 |
2026-02-04 01:23:50.307107 | TASK [stage-output : Make all log files readable]
2026-02-04 01:23:50.640635 | orchestrator | ok
2026-02-04 01:23:50.650344 |
2026-02-04 01:23:50.650472 | TASK [stage-output : Rename log files that match extensions_to_txt]
2026-02-04 01:23:50.685361 | orchestrator | skipping: Conditional result was False
2026-02-04 01:23:50.699038 |
2026-02-04 01:23:50.699179 | TASK [stage-output : Discover log files for compression]
2026-02-04 01:23:50.724100 | orchestrator | skipping: Conditional result was False
2026-02-04 01:23:50.733919 |
2026-02-04 01:23:50.734063 | LOOP [stage-output : Archive everything from logs]
2026-02-04 01:23:50.780682 |
2026-02-04 01:23:50.780852 | PLAY [Post cleanup play]
2026-02-04 01:23:50.789738 |
2026-02-04 01:23:50.789847 | TASK [Set cloud fact (Zuul deployment)]
2026-02-04 01:23:50.846412 | orchestrator | ok
2026-02-04 01:23:50.859589 |
2026-02-04 01:23:50.859725 | TASK [Set cloud fact (local deployment)]
2026-02-04 01:23:50.884955 | orchestrator | skipping: Conditional result was False
2026-02-04 01:23:50.901293 |
2026-02-04 01:23:50.901461 | TASK [Clean the cloud environment]
2026-02-04 01:23:52.936976 | orchestrator | 2026-02-04 01:23:52 - clean up servers
2026-02-04 01:23:53.726582 | orchestrator | 2026-02-04 01:23:53 - testbed-manager
2026-02-04 01:23:53.812262 | orchestrator | 2026-02-04 01:23:53 - testbed-node-4
2026-02-04 01:23:53.898772 | orchestrator | 2026-02-04 01:23:53 - testbed-node-1
2026-02-04 01:23:53.983008 | orchestrator | 2026-02-04 01:23:53 - testbed-node-5
2026-02-04 01:23:54.077383 | orchestrator | 2026-02-04 01:23:54 - testbed-node-3
2026-02-04 01:23:54.167432 | orchestrator | 2026-02-04 01:23:54 - testbed-node-2
2026-02-04 01:23:54.254879 | orchestrator | 2026-02-04 01:23:54 - testbed-node-0
2026-02-04 01:23:54.349417 | orchestrator | 2026-02-04 01:23:54 - clean up keypairs
2026-02-04 01:23:54.366097 | orchestrator | 2026-02-04 01:23:54 - testbed
2026-02-04 01:23:54.391907 | orchestrator | 2026-02-04 01:23:54 - wait for servers to be gone
2026-02-04 01:24:05.325939 | orchestrator | 2026-02-04 01:24:05 - clean up ports
2026-02-04 01:24:05.521173 | orchestrator | 2026-02-04 01:24:05 - 39a8652a-8cca-4650-8895-61ca52f871b7
2026-02-04 01:24:05.831752 | orchestrator | 2026-02-04 01:24:05 - 41e66fbc-2968-4723-8c20-4cc7d15d1883
2026-02-04 01:24:06.093626 | orchestrator | 2026-02-04 01:24:06 - 60afe6a2-af1d-41c5-a7bf-703a6a19cd40
2026-02-04 01:24:06.700022 | orchestrator | 2026-02-04 01:24:06 - 75e21f95-e620-443c-b9a1-f31a71dc27ad
2026-02-04 01:24:06.921366 | orchestrator | 2026-02-04 01:24:06 - 9ab7d801-f36f-454c-abf8-99a1ea0d68a4
2026-02-04 01:24:07.320260 | orchestrator | 2026-02-04 01:24:07 - c3a46693-3036-4a99-9e1d-352a8c0cb50d
2026-02-04 01:24:07.523427 | orchestrator | 2026-02-04 01:24:07 - eff79e66-f199-430e-b760-e143bf4229be
2026-02-04 01:24:07.792350 | orchestrator | 2026-02-04 01:24:07 - clean up volumes
2026-02-04 01:24:07.909968 | orchestrator | 2026-02-04 01:24:07 - testbed-volume-2-node-base
2026-02-04 01:24:07.948032 | orchestrator | 2026-02-04 01:24:07 - testbed-volume-4-node-base
2026-02-04 01:24:07.996558 | orchestrator | 2026-02-04 01:24:07 - testbed-volume-3-node-base
2026-02-04 01:24:08.043184 | orchestrator | 2026-02-04 01:24:08 - testbed-volume-0-node-base
2026-02-04 01:24:08.081634 | orchestrator | 2026-02-04 01:24:08 - testbed-volume-1-node-base
2026-02-04 01:24:08.128738 | orchestrator | 2026-02-04 01:24:08 - testbed-volume-5-node-base
2026-02-04 01:24:08.174084 | orchestrator | 2026-02-04 01:24:08 - testbed-volume-manager-base
2026-02-04 01:24:08.216152 | orchestrator | 2026-02-04 01:24:08 - testbed-volume-3-node-3
2026-02-04 01:24:08.262559 | orchestrator | 2026-02-04 01:24:08 - testbed-volume-4-node-4
2026-02-04 01:24:08.304773 | orchestrator | 2026-02-04 01:24:08 - testbed-volume-8-node-5
2026-02-04 01:24:08.349932 | orchestrator | 2026-02-04 01:24:08 - testbed-volume-2-node-5
2026-02-04 01:24:08.393455 | orchestrator | 2026-02-04 01:24:08 - testbed-volume-6-node-3
2026-02-04 01:24:08.434746 | orchestrator | 2026-02-04 01:24:08 - testbed-volume-1-node-4
2026-02-04 01:24:08.477182 | orchestrator | 2026-02-04 01:24:08 - testbed-volume-5-node-5
2026-02-04 01:24:08.517850 | orchestrator | 2026-02-04 01:24:08 - testbed-volume-7-node-4
2026-02-04 01:24:08.564261 | orchestrator | 2026-02-04 01:24:08 - testbed-volume-0-node-3
2026-02-04 01:24:08.603252 | orchestrator | 2026-02-04 01:24:08 - disconnect routers
2026-02-04 01:24:08.714086 | orchestrator | 2026-02-04 01:24:08 - testbed
2026-02-04 01:24:09.662545 | orchestrator | 2026-02-04 01:24:09 - clean up subnets
2026-02-04 01:24:09.710245 | orchestrator | 2026-02-04 01:24:09 - subnet-testbed-management
2026-02-04 01:24:09.868514 | orchestrator | 2026-02-04 01:24:09 - clean up networks
2026-02-04 01:24:10.036399 | orchestrator | 2026-02-04 01:24:10 - net-testbed-management
2026-02-04 01:24:10.317861 | orchestrator | 2026-02-04 01:24:10 - clean up security groups
2026-02-04 01:24:10.361755 | orchestrator | 2026-02-04 01:24:10 - testbed-node
2026-02-04 01:24:10.468239 | orchestrator | 2026-02-04 01:24:10 - testbed-management
2026-02-04 01:24:10.594098 | orchestrator | 2026-02-04 01:24:10 - clean up floating ips
2026-02-04 01:24:10.624806 | orchestrator | 2026-02-04 01:24:10 - 81.163.192.33
2026-02-04 01:24:11.004621 | orchestrator | 2026-02-04 01:24:11 - clean up routers
2026-02-04 01:24:11.099931 | orchestrator | 2026-02-04 01:24:11 - testbed
2026-02-04 01:24:12.457767 | orchestrator | ok: Runtime: 0:00:20.808272
2026-02-04 01:24:12.463586 |
2026-02-04 01:24:12.463743 | PLAY RECAP
2026-02-04 01:24:12.463848 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2026-02-04 01:24:12.463899 |
2026-02-04 01:24:12.615183 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-02-04 01:24:12.617701 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-02-04 01:24:13.387809 |
2026-02-04 01:24:13.387974 | PLAY [Cleanup play]
2026-02-04 01:24:13.404638 |
2026-02-04 01:24:13.404767 | TASK [Set cloud fact (Zuul deployment)]
2026-02-04 01:24:13.464904 | orchestrator | ok
2026-02-04 01:24:13.474222 |
2026-02-04 01:24:13.474370 | TASK [Set cloud fact (local deployment)]
2026-02-04 01:24:13.509476 | orchestrator | skipping: Conditional result was False
2026-02-04 01:24:13.525102 |
2026-02-04 01:24:13.525234 | TASK [Clean the cloud environment]
2026-02-04 01:24:14.777281 | orchestrator | 2026-02-04 01:24:14 - clean up servers
2026-02-04 01:24:15.278460 | orchestrator | 2026-02-04 01:24:15 - clean up keypairs
2026-02-04 01:24:15.299369 | orchestrator | 2026-02-04 01:24:15 - wait for servers to be gone
2026-02-04 01:24:15.350724 | orchestrator | 2026-02-04 01:24:15 - clean up ports
2026-02-04 01:24:15.423310 | orchestrator | 2026-02-04 01:24:15 - clean up volumes
2026-02-04 01:24:15.495622 | orchestrator | 2026-02-04 01:24:15 - disconnect routers
2026-02-04 01:24:15.529117 | orchestrator | 2026-02-04 01:24:15 - clean up subnets
2026-02-04 01:24:15.546945 | orchestrator | 2026-02-04 01:24:15 - clean up networks
2026-02-04 01:24:15.673431 | orchestrator | 2026-02-04 01:24:15 - clean up security groups
2026-02-04 01:24:15.711265 | orchestrator | 2026-02-04 01:24:15 - clean up floating ips
2026-02-04 01:24:15.736838 | orchestrator | 2026-02-04 01:24:15 - clean up routers
2026-02-04 01:24:16.061453 | orchestrator | ok: Runtime: 0:00:01.447625
2026-02-04 01:24:16.065226 |
2026-02-04 01:24:16.065392 | PLAY RECAP
2026-02-04 01:24:16.065519 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2026-02-04 01:24:16.065583 |
2026-02-04 01:24:16.198768 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-02-04 01:24:16.201468 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-02-04 01:24:16.956091 |
2026-02-04 01:24:16.956246 | PLAY [Base post-fetch]
2026-02-04 01:24:16.971722 |
2026-02-04 01:24:16.971848 | TASK [fetch-output : Set log path for multiple nodes]
2026-02-04 01:24:17.028125 | orchestrator | skipping: Conditional result was False
2026-02-04 01:24:17.043493 |
2026-02-04 01:24:17.043690 | TASK [fetch-output : Set log path for single node]
2026-02-04 01:24:17.083174 | orchestrator | ok
2026-02-04 01:24:17.091756 |
2026-02-04 01:24:17.091920 | LOOP [fetch-output : Ensure local output dirs]
2026-02-04 01:24:17.573438 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/fc9ae95db2ad46a99572c1cde3cf2fd8/work/logs"
2026-02-04 01:24:17.833143 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/fc9ae95db2ad46a99572c1cde3cf2fd8/work/artifacts"
2026-02-04 01:24:18.135454 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/fc9ae95db2ad46a99572c1cde3cf2fd8/work/docs"
2026-02-04 01:24:18.153390 |
2026-02-04 01:24:18.153534 | LOOP [fetch-output : Collect logs, artifacts and docs]
2026-02-04 01:24:19.120585 | orchestrator | changed: .d..t...... ./
2026-02-04 01:24:19.120911 | orchestrator | changed: All items complete
2026-02-04 01:24:19.120963 |
2026-02-04 01:24:19.869258 | orchestrator | changed: .d..t...... ./
2026-02-04 01:24:20.619952 | orchestrator | changed: .d..t...... ./
2026-02-04 01:24:20.647451 |
2026-02-04 01:24:20.647569 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2026-02-04 01:24:20.683329 | orchestrator | skipping: Conditional result was False
2026-02-04 01:24:20.686420 | orchestrator | skipping: Conditional result was False
2026-02-04 01:24:20.711078 |
2026-02-04 01:24:20.711184 | PLAY RECAP
2026-02-04 01:24:20.711253 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2026-02-04 01:24:20.711288 |
2026-02-04 01:24:20.833417 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-02-04 01:24:20.834546 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-02-04 01:24:21.555647 |
2026-02-04 01:24:21.555804 | PLAY [Base post]
2026-02-04 01:24:21.570004 |
2026-02-04 01:24:21.570148 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2026-02-04 01:24:22.569826 | orchestrator | changed
2026-02-04 01:24:22.581128 |
2026-02-04 01:24:22.581268 | PLAY RECAP
2026-02-04 01:24:22.581347 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2026-02-04 01:24:22.581427 |
2026-02-04 01:24:22.704613 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-02-04 01:24:22.706217 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2026-02-04 01:24:23.534304 |
2026-02-04 01:24:23.534489 | PLAY [Base post-logs]
2026-02-04 01:24:23.548807 |
2026-02-04 01:24:23.548961 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2026-02-04 01:24:23.980685 | localhost | changed
2026-02-04 01:24:23.990643 |
2026-02-04 01:24:23.990784 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2026-02-04 01:24:24.028926 | localhost | ok
2026-02-04 01:24:24.036711 |
2026-02-04 01:24:24.036890 | TASK [Set zuul-log-path fact]
2026-02-04 01:24:24.065850 | localhost | ok
2026-02-04 01:24:24.080655 |
2026-02-04 01:24:24.080799 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-02-04 01:24:24.118693 | localhost | ok
2026-02-04 01:24:24.125571 |
2026-02-04 01:24:24.125734 | TASK [upload-logs : Create log directories]
2026-02-04 01:24:24.634802 | localhost | changed
2026-02-04 01:24:24.640605 |
2026-02-04 01:24:24.640766 | TASK [upload-logs : Ensure logs are readable before uploading]
2026-02-04 01:24:25.133507 | localhost -> localhost | ok: Runtime: 0:00:00.007310
2026-02-04 01:24:25.143106 |
2026-02-04 01:24:25.143327 | TASK [upload-logs : Upload logs to log server]
2026-02-04 01:24:25.718326 | localhost | Output suppressed because no_log was given
2026-02-04 01:24:25.720168 |
2026-02-04 01:24:25.720272 | LOOP [upload-logs : Compress console log and json output]
2026-02-04 01:24:25.766234 | localhost | skipping: Conditional result was False
2026-02-04 01:24:25.774578 | localhost | skipping: Conditional result was False
2026-02-04 01:24:25.790781 |
2026-02-04 01:24:25.791060 | LOOP [upload-logs : Upload compressed console log and json output]
2026-02-04 01:24:25.836872 | localhost | skipping: Conditional result was False
2026-02-04 01:24:25.837207 |
2026-02-04 01:24:25.843966 | localhost | skipping: Conditional result was False
2026-02-04 01:24:25.853636 |
2026-02-04 01:24:25.853854 | LOOP [upload-logs : Upload console log and json output]